286 research outputs found

    A Classification of BPEL Extensions

    The Business Process Execution Language (BPEL) has emerged as the de-facto standard for implementing business processes. The language is designed to be extensible, so that additional valuable features can be included in a standardized manner. A number of BPEL extensions are available; however, they have neither been classified nor evaluated with respect to their compliance with the BPEL standard. This article fills this gap by providing a framework for classifying BPEL extensions, a classification of existing extensions, and a guideline for designing BPEL extensions.

    Using ontology and semantic web services to support modeling in systems biology

    This thesis addresses the problem of collaboration between experimental biologists and modelers in the study of systems biology by using ontology and Semantic Web Services techniques. Modeling in systems biology is concerned with using experimental information and mathematical methods to build quantitative models across different biological scales. This requires interoperation among various knowledge sources and services. Ontology and Semantic Web Services potentially provide an infrastructure to meet this requirement. In our study, we propose an ontology-centered framework within the Semantic Web infrastructure that aims at standardizing the various areas of knowledge involved in biological modeling processes. In this framework, we first specify an ontology-based meta-model for building biological models. This meta-model supports using shared biological ontologies to annotate biological entities in the models, allows semantic queries and automatic discovery, enables easy model reuse and composition, and serves as a basis for embedding external knowledge. We also develop means of transforming biological data sources and data analysis methods into Web Services. These Web Services can then be composed to perform parameterization in biological modeling. The decision-making knowledge and the workflow of the parameterization processes are recorded in the semantic descriptions of these Web Services and embedded in model instances built on our proposed meta-model. We use three cases of biological modeling to evaluate our framework. By examining our ontology-centered framework in practice, we conclude that by using ontology to represent biological models and using Semantic Web Services to standardize the knowledge components in modeling processes, greater capabilities of knowledge sharing, reuse, and collaboration can be achieved. We also conclude that ontology-based biological models with formal semantics are essential for standardizing knowledge in compliance with the Semantic Web vision.
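    The annotation and semantic-query capabilities described above can be illustrated with a minimal sketch. A plain Python set of triples stands in for a real RDF store; the model identifiers and the `is_a` predicate are hypothetical, while CHEBI:17234 is the actual ChEBI identifier for glucose:

```python
# Sketch of ontology-based annotation of model entities. Triples are plain
# tuples here; the model identifiers are hypothetical, the ChEBI term is real.

triples = set()

def annotate(entity, predicate, term):
    triples.add((entity, predicate, term))

# Annotate species from two different models with the same shared ontology
# term, enabling semantic queries across models that use the same vocabulary.
annotate("model1:glucose", "is_a", "CHEBI:17234")   # ChEBI term for glucose
annotate("model2:Glc", "is_a", "CHEBI:17234")

def find_equivalent(term):
    """Semantic query: all entities annotated with the same ontology term."""
    return sorted(e for (e, p, t) in triples if p == "is_a" and t == term)

print(find_equivalent("CHEBI:17234"))
# ['model1:glucose', 'model2:Glc']
```

    Shared annotations are what make the automatic discovery and model composition described above possible: two models that never reference each other can still be matched through the common ontology term.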

    Automatic generation of software interfaces for supporting decision-making processes. An application of domain engineering & machine learning

    Data analysis is a key process to foster knowledge generation in particular domains or fields of study. With a strong informative foundation derived from the analysis of collected data, decision-makers can make strategic choices with the aim of obtaining valuable benefits in their specific areas of action. However, given the steady growth of data volumes, data analysis needs to rely on powerful tools to enable knowledge extraction. Information dashboards offer a software solution for analyzing large volumes of data visually in order to identify patterns and relations and make decisions according to the presented information. But decision-makers may have different goals and, consequently, different necessities regarding their dashboards. Moreover, the variety of data sources, structures, and domains can hamper the design and implementation of these tools. This Ph.D. thesis tackles the challenge of improving the development process of information dashboards and data visualizations while enhancing their quality and features in terms of personalization, usability, and flexibility, among others. Several research activities have been carried out to support this thesis. First, a systematic literature mapping and review was performed to analyze different methodologies and solutions related to the automatic generation of tailored information dashboards. The outcomes of the review led to the selection of a model-driven approach in combination with the software product line paradigm to deal with the automatic generation of information dashboards. In this context, a meta-model was developed following a domain engineering approach. This meta-model represents the skeleton of information dashboards and data visualizations through the abstraction of their components and features, and has been the backbone of the subsequent generative pipeline for these tools. 
    The meta-model and the generative pipeline have been tested through their integration in different scenarios, both theoretical and practical. Regarding the theoretical dimension of the research, the meta-model has been successfully integrated with another meta-model to support knowledge generation in learning ecosystems, and used as a framework to conceptualize and instantiate information dashboards in different domains. In terms of practical applications, the focus has been put on how to transform the meta-model into an instance adapted to a specific context, and how to finally transform this latter model into code, i.e., the final, functional product. These practical scenarios involved the automatic generation of dashboards in the context of a Ph.D. programme, the application of Artificial Intelligence algorithms in the process, and the development of a graphical instantiation platform that combines the meta-model and the generative pipeline into a visual generation system. Finally, different case studies have been conducted in the employment and employability, health, and education domains. The number of applications of the meta-model across theoretical and practical dimensions and domains is a result in itself. Every outcome associated with this thesis is driven by the dashboard meta-model, which also proves its versatility and flexibility when it comes to conceptualizing, generating, and capturing knowledge related to dashboards and data visualizations.
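    As a rough illustration of the model-driven idea (not the thesis's actual meta-model; all class, field, and file names below are hypothetical), a dashboard can be captured as a model instance and then transformed into a concrete configuration by a generator:

```python
from dataclasses import dataclass, field

# Hypothetical, minimal meta-model sketch: the real meta-model is far richer;
# the names here (Visualization, Dashboard, generate_config) are illustrative.

@dataclass
class Visualization:
    name: str
    chart_type: str          # e.g. "bar", "line"
    data_source: str         # identifier of the data set to bind

@dataclass
class Dashboard:
    title: str
    components: list = field(default_factory=list)

def generate_config(dashboard):
    """Model-to-text step: turn a model instance into a concrete config."""
    return {
        "title": dashboard.title,
        "widgets": [
            {"name": v.name, "type": v.chart_type, "source": v.data_source}
            for v in dashboard.components
        ],
    }

model = Dashboard("PhD Programme KPIs", [
    Visualization("Enrolment", "bar", "students.csv"),
    Visualization("Completion rate", "line", "completions.csv"),
])
print(generate_config(model))
```

    The point of the separation is that personalization happens at the model level (which visualizations, bound to which sources), while the generator stays generic across domains.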

    Improving Reuse of Distributed Transaction Software with Transaction-Aware Aspects

    Implementing crosscutting concerns for transactions is difficult, even using Aspect-Oriented Programming Languages (AOPLs) such as AspectJ. Many of these challenges arise because the context of a transaction-related crosscutting concern consists of loosely coupled abstractions like dynamically generated identifiers, timestamps, and tentative value sets of distributed resources. Current AOPLs do not provide joinpoints and pointcuts for weaving advice into high-level abstractions or contexts, such as transaction contexts. Other challenges stem from the essential complexity in the nature of the data, the operations on the data, or the volume of data, while accidental complexity comes from the way the problem is being solved, even when using common transaction frameworks. This dissertation describes an extension to AspectJ, called TransJ, with which developers can implement transaction-related crosscutting concerns in cohesive and loosely coupled aspects. It also presents a preliminary experiment that provides evidence of improved reusability without sacrificing the performance of applications requiring essential transactions. This empirical study is conducted using an extended quality model for transactional applications to define measurements on the transaction software systems. This quality model defines three goals: the first relates to code quality (in terms of its reusability); the second to software performance; and the third concerns software development efficiency. Results from this study show that TransJ can improve reusability while maintaining the performance of applications requiring transactions, across all eight areas addressed by the hypotheses: better encapsulation and separation of concerns; looser coupling, higher cohesion, and less tangling; improved obliviousness; preserved software efficiency; improved extensibility; and a faster development process.
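    TransJ itself extends AspectJ, but the underlying idea, wrapping business operations in transaction "advice" so that the business code stays oblivious to transaction handling, can be sketched by analogy in Python with a decorator. The Transaction class and its logging below are illustrative stand-ins, not TransJ's API:

```python
import functools

# Analogy only: Python has no aspect weaver, but a decorator can mimic
# "around advice" that brackets a business operation with begin/commit and
# rolls back on failure. The Transaction class here is hypothetical.

class Transaction:
    def __init__(self):
        self.log = []
    def begin(self):
        self.log.append("begin")
    def commit(self):
        self.log.append("commit")
    def rollback(self):
        self.log.append("rollback")

def transactional(tx):
    """'Around advice': run the wrapped operation inside a transaction."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            tx.begin()
            try:
                result = fn(*args, **kwargs)
                tx.commit()
                return result
            except Exception:
                tx.rollback()
                raise
        return wrapper
    return decorator

tx = Transaction()

@transactional(tx)
def transfer(amount):
    # Business logic stays oblivious to transaction handling.
    return f"transferred {amount}"

transfer(100)

@transactional(tx)
def failing():
    raise ValueError("simulated failure")

try:
    failing()
except ValueError:
    pass

print(tx.log)  # ['begin', 'commit', 'begin', 'rollback']
```

    What the dissertation adds beyond this analogy is joinpoints over the transaction context itself (identifiers, timestamps, tentative value sets), which a plain wrapper cannot see.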

    Fostering Distributed Business Logic in Open Collaborative Networks: an integrated approach based on semantic and swarm coordination

    Given the great opportunities provided by Open Collaborative Networks (OCNs), their success depends on the effective integration of composite business logic at all stages. However, a dilemma between cooperation and competition often arises in environments where access to business knowledge can provide absolute advantages over the competition. Indeed, although it is apparent that business logic should be automated for effective integration, chain participants at all segments are often highly protective of their own knowledge. In this paper, we propose a solution to this problem by outlining a novel approach with a supporting architectural view. In our approach, business rules are modeled via the Semantic Web, and their execution is coordinated by a workflow model. Each company's rules can be kept private, and the business rules can be combined to achieve goals with defined interdependencies and responsibilities in the workflow. The use of a workflow model allows assembling business facts together while protecting data sources. We propose a privacy-preserving perturbation technique based on digital stigmergy. Stigmergy is a processing schema based on the principle of self-aggregation of marks produced by data. Stigmergy protects data privacy because only marks are involved in aggregation, in place of actual data values, without explicit data modeling. This paper discusses the proposed approach and examines its characteristics through actual scenarios.
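    The mark-based aggregation can be sketched as follows. This is a toy one-dimensional sketch of the general stigmergy principle, not the paper's exact formulation: the Gaussian mark shape, grid size, and evaporation rate are illustrative choices.

```python
import math

# Each participant deposits a "mark" (a small Gaussian bump) centred on its
# private value; only the summed marks are ever shared, never the raw values.

GRID = 100                      # discretised domain of possible values
WIDTH = 3.0                     # spread of each mark
EVAPORATION = 0.1               # fraction of the trail lost per round

def deposit(trail, value):
    """Add one participant's mark to the shared trail."""
    for i in range(GRID):
        trail[i] += math.exp(-((i - value) ** 2) / (2 * WIDTH ** 2))

def evaporate(trail):
    """Decay old marks so the trail tracks current data."""
    for i in range(GRID):
        trail[i] *= (1 - EVAPORATION)

trail = [0.0] * GRID
for private_value in [40, 42, 43, 70]:   # values never leave their owners
    deposit(trail, private_value)
evaporate(trail)

peak = max(range(GRID), key=lambda i: trail[i])
print(peak)  # the trail peaks inside the cluster of marks around 42
```

    Because overlapping marks self-aggregate, the trail reveals where values cluster without any party disclosing its own value, which is the privacy-preserving property the paper builds on.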

    A framework for context-aware sensor fusion

    This thesis was awarded the International Mention in the doctoral degree. Sensor fusion is a mature but very active research field, included in the more general discipline of information fusion. It studies how to combine data coming from different sensors in such a way that the resulting information is better in some sense (more complete, accurate, or stable) than any of the original sources used individually. Context is defined as everything that constrains or affects the process of solving a problem without being part of the problem or the solution itself. Over the last years, the scientific community has shown remarkable interest in the potential of exploiting this context information to build smarter systems that make better use of the available information. Traditional sensor fusion systems are based on fixed processing schemes over a predefined set of sensors, where both the employed algorithms and the domain are assumed to remain unchanged over time. Nowadays, affordable mobile and embedded systems have high sensory, computational, and communication capabilities, making them a perfect base for building sensor fusion applications. This represents an opportunity to explore fusion systems that are bigger and more complex, but it poses the challenge of offering optimal performance under changing and unexpected circumstances. This thesis proposes a framework supporting the creation of sensor fusion systems with self-adaptive capabilities, in which context information plays a crucial role. These two aspects have never before been integrated in a common approach to the sensor fusion problem. The proposal includes a preliminary theoretical analysis of both aspects of the problem, the design of a generic architecture capable of hosting any type of centralized sensor fusion application, and a description of the process to be followed for applying the architecture to solve a sensor fusion problem. 
    The experimental section shows how to apply this thesis's proposal, step by step, to create a context-aware sensor fusion system with self-adaptive capabilities. The process is illustrated for two different domains: a maritime/coastal surveillance application, and ground vehicle navigation in an urban environment. The obtained results demonstrate the viability and validity of the implemented prototypes, as well as the benefit of including context information to enhance sensor fusion processes. Official Doctoral Programme in Computer Science and Technology. Committee: Javier Bajo Pérez (chair), Antonio Berlanga de Jesús (secretary), Lauro Snidar (member)
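    A minimal sketch of the core idea, assuming hypothetical sensor names and a standard inverse-variance fusion rule (not necessarily the exact technique used in the thesis): context information adapts the trust placed in each sensor before the estimates are fused.

```python
# Context-aware fusion sketch: the context ("fog") inflates the variance of
# the sensor it degrades, so fusion automatically leans on the other sensor.
# Sensor names, values, and variances are illustrative.

def fuse(estimates):
    """Inverse-variance weighted average of (value, variance) pairs."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total          # fused value and fused variance

def apply_context(sensors, context):
    """Adapt each sensor's variance to the current context."""
    adjusted = []
    for name, value, var in sensors:
        if context == "fog" and name == "camera":
            var *= 10.0                # camera is unreliable in fog
        adjusted.append((value, var))
    return adjusted

sensors = [("camera", 10.0, 1.0), ("radar", 12.0, 1.0)]

clear_fused, _ = fuse(apply_context(sensors, "clear"))
fog_fused, _ = fuse(apply_context(sensors, "fog"))
print(clear_fused, fog_fused)   # 11.0 in clear weather; nearer 12.0 in fog
```

    Under clear conditions the two equally trusted sensors average to 11.0; in fog the down-weighted camera pulls the estimate towards the radar's 12.0, which is the kind of self-adaptation the framework generalizes.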

    Performance Assessment Strategies

    Using engineering performance evaluations to explore design alternatives during the conceptual phase of architectural design helps to understand the relationships between form and performance, and is crucial for developing well-performing final designs. Computer-aided conceptual design has the potential to aid the design team in discovering and highlighting these relationships, especially by means of procedural and parametric geometry to support the generation of geometric designs, and building performance simulation tools to support performance assessments. However, current tools and methods for computer-aided conceptual design in architecture neither explicitly reveal nor allow backtracking of the relationships between the performance and the geometry of the design. They currently support post-engineering rather than early design decisions and the design exploration process. Focusing on large roofs, this research aims at developing a computational design approach to support designers in performance-driven explorations. The approach is meant to facilitate multidisciplinary integration and the learning process of the designer, not to constrain the process in precompiled procedures or hard engineering formulations, nor to automate it by delegating design creativity to computational procedures. PAS (Performance Assessment Strategies) as a method is the main output of the research. It consists of a framework including guidelines and an extensible library of procedures for parametric modelling. It is structured in three parts. Pre-PAS provides guidelines for defining a design strategy in preparation for the parameterization process. Model-PAS provides guidelines, procedures, and scripts for building the parametric models. Explore-PAS supports the assessment of solutions based on numeric evaluations and performance simulations, until a suitable design solution is identified. PAS has been developed based on action research. 
    Several case studies have focused on each step of PAS and on their interrelationships. The relations between the knowledge available in pre-PAS and the challenges of exploring the solution space in explore-PAS have been highlighted. In order to facilitate the explore-PAS phase in the case of large solution spaces, the support of genetic algorithms has been investigated and the existing method ParaGen has been further developed. Final case studies have focused on the potential of ParaGen to identify well-performing solutions, to extract knowledge during explore-PAS, and to allow interventions by the designer as an alternative to generations driven solely by coded criteria. Both the use of PAS and its recommended future developments are addressed in the thesis.
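    The genetic-algorithm-supported exploration phase can be sketched in miniature. Here a parametric "roof" is reduced to two parameters, a stand-in performance metric scores each variant, and a simple elitist loop breeds better variants; the model, metric, and GA settings are all illustrative placeholders, not PAS or ParaGen itself.

```python
import random

random.seed(1)  # deterministic run for the sketch

def performance(span, rise):
    # Placeholder metric: prefer a rise-to-span ratio near 0.15.
    return -abs(rise / span - 0.15)

def random_variant():
    return (random.uniform(20, 60), random.uniform(1, 15))

def mutate(variant):
    span, rise = variant
    # Small parameter perturbations, clamped to keep the geometry sensible.
    return (max(5.0, span + random.uniform(-2, 2)),
            max(0.5, rise + random.uniform(-0.5, 0.5)))

population = [random_variant() for _ in range(20)]
for generation in range(30):
    population.sort(key=lambda v: performance(*v), reverse=True)
    parents = population[:5]                      # keep the best variants
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best_span, best_rise = max(population, key=lambda v: performance(*v))
print(best_span, best_rise)   # the best ratio ends close to the 0.15 target
```

    In the real workflow the placeholder metric would be a building performance simulation, and, as the case studies note, the designer can intervene in the selection rather than leaving generations entirely to coded criteria.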
