33 research outputs found

    Pervasive computing reference architecture from a software engineering perspective (PervCompRA-SE)

    Pervasive computing (PervComp) is one of the most challenging research topics nowadays. Its complexity exceeds that of the older mainframe and client-server computation models. Its systems are highly volatile, mobile, and resource-limited, and they stream large amounts of data from different sensors. Despite these challenges, the domain entails by default a lengthy list of desired quality features such as context sensitivity, adaptable behavior, concurrency, service omnipresence, and invisibility. Fortunately, device manufacturers have improved the enabling technologies, such as sensors, network bandwidth, and batteries, paving the road for pervasive systems with high capabilities. The domain has also gained an enormous amount of attention from researchers since it was first introduced in the early 1990s. Yet pervasive systems are still classified as visionary systems that are expected to be woven into people's daily lives. At present, PervComp systems still have no unified architecture, have a limited scope of context sensitivity and adaptability, and leave many essential quality features insufficiently addressed. The reference architecture (RA) proposed in this research, PervCompRA-SE, provides solutions for these problems through a comprehensive and innovative pair of business and technical architectural reference models. Both models were based on deep analytical activities and were evaluated using different qualitative and quantitative methods. In this thesis we surveyed a wide range of research projects in PervComp across various subdomains to specify our methodological approach and to identify the quality features most commonly found in these areas. The thesis presents a novel approach that utilizes theories from sociology, psychology, and process engineering. It analyzes the business and architectural problems in two separate chapters covering the business reference architecture (BRA) and the technical reference architecture (TRA), and the solutions to these problems are introduced in the same chapters. We devised an associated comprehensive ontology with semantic meanings and measurement scales. Both the BRA and TRA were validated throughout the course of the research and evaluated as a whole using traceability, benchmark, survey, and simulation methods. The thesis introduces a new reference architecture in the PervComp domain, developed using a novel requirements engineering method, together with a novel statistical method for trade-off analysis and conflict resolution between requirements. The adaptation of activity theory, human perception theory, and process re-engineering methods to develop the BRA and the TRA proved very successful, and our approach of reusing the ontological dictionary to monitor system performance was also innovative. Finally, the evaluation methods of the thesis serve as a role model for researchers on how to use both qualitative and quantitative methods to evaluate a reference architecture. Our results show that the requirements engineering process, along with the trade-off analysis, was essential to delivering PervCompRA-SE. We discovered that the invisibility feature, one of the envisioned quality features of PervComp, does not hold up in practice, and that qualitative evaluation methods were just as important as quantitative ones for recognizing the overall quality of the RA by machines as well as by human beings.
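    The statistical trade-off method itself is not detailed in the abstract. As a rough illustration of what pairwise trade-off analysis between quality features could look like, the sketch below scores feature pairs for conflict and relaxes the lower-priority feature of each strongly conflicting pair. All feature names, weights, scores, and function names are hypothetical, not values or code from the thesis.

```python
# Hypothetical sketch of pairwise trade-off analysis between quality
# features; the weights and conflict scores below are illustrative
# assumptions, not values from the thesis.
from itertools import combinations

# Stakeholder-assigned priority weights (0..1) for each quality feature.
priorities = {
    "context_sensitivity": 0.9,
    "adaptable_behavior": 0.8,
    "invisibility": 0.4,
    "service_omnipresence": 0.7,
}

# Estimated pairwise conflict scores (0 = independent, 1 = fully conflicting).
conflicts = {
    ("invisibility", "context_sensitivity"): 0.8,
    ("adaptable_behavior", "invisibility"): 0.6,
}

def resolve(threshold=0.5):
    """For each strongly conflicting pair, relax the lower-priority feature."""
    dropped = set()
    for a, b in combinations(sorted(priorities), 2):
        score = conflicts.get((a, b)) or conflicts.get((b, a)) or 0.0
        if score >= threshold:
            dropped.add(a if priorities[a] < priorities[b] else b)
    return [f for f in sorted(priorities) if f not in dropped], sorted(dropped)

kept, dropped = resolve()
print("kept:", kept)        # higher-priority features survive
print("relaxed:", dropped)  # conflicting lower-priority features are relaxed
```

Under these made-up weights the procedure relaxes invisibility in favor of context sensitivity and adaptable behavior, which happens to mirror the thesis finding quoted above.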

    Extended use of SOA and Cloud Computing Security Gateway Protocol for Big Data analytic sessions

    The advent of Cloud computing and Big Data has introduced a paradigm shift in the area of Information Technology. Cloud security is lagging behind the evolution of Cloud computing, and this lag requires further research. The adoption of the Cloud by businesses results in the use of VPNs and SANs. In this new paradigm, computing is conducted in the Cloud rather than onsite, which calls for a security protocol, since the processing of Big Data is simultaneously massive and vulnerable. The utilization of the Cloud and Big Data has introduced gaps in terms of standard business processes as well as data security while data is being processed using the MapReduce model. The lag in open source security is the problem addressed in this doctoral thesis, which provides a security gateway protocol that any organization or entity can tailor to its environmental constraints. All of the major software solution providers, such as Microsoft, Oracle, and SAP, have their own closed source security protocols available for organizations to use. Several versions of the open source Kerberos are also available for securing Big Data processing within the Cloud on commodity hardware. Another well-known open source security gateway is the Apache Knox Gateway; however, it only provides a single access point for REST interactions with Hadoop clusters. There has been a need for an open source security protocol that organizations can customize to their needs and that is not bound to REST interactions alone. The research presented in this thesis provides such an open source solution for the industry. The proposed security gateway makes extended use of SOA by adapting the Achievable Service Oriented Architecture (ASOA). Since the use of Information Technology has been significantly altered by the emergence of the Cloud, organizations using legacy technology also need to transition from their current business processing to secure processing of Big Data within the Cloud. This thesis builds the security gateway protocol upon SOA, using ASOA as the base methodology. It also introduces a Master Observer Service (MOS), which uses a Messaging Secured Service (MSS) as an added capability to strengthen secured data availability for MapReduce processing of Big Data. The thesis presents an actual implementation using Business Process Re-engineering (BPR). A tailored implementation of the Security Gateway Protocol has been deployed in one of the Fortune 100 financial institutions using the Master Observer Service, allowing the institution to process its Big Data with MapReduce in secured sessions in the Cloud.
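    The abstract names the Master Observer Service and Messaging Secured Service without specifying their interfaces. The sketch below shows one plausible, much-simplified form of a gateway session check, where a client must present a signed, time-limited token before a MapReduce job is admitted; the token format, HMAC scheme, and function names are assumptions, not the thesis design.

```python
# Minimal illustrative sketch of a security-gateway session check in the
# spirit of a Master Observer Service: a client obtains a signed session
# token before a MapReduce job is accepted. All names are hypothetical.
import hashlib
import hmac
import time

SECRET = b"gateway-shared-secret"  # hypothetical key held by the gateway

def issue_token(client_id: str, ttl: int = 300) -> str:
    """Issue a time-limited session token for an authenticated client."""
    expiry = str(int(time.time()) + ttl)
    payload = f"{client_id}|{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str) -> bool:
    """Check signature and expiry before admitting a job to the cluster."""
    client_id, expiry, sig = token.rsplit("|", 2)
    payload = f"{client_id}|{expiry}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and time.time() < int(expiry)

token = issue_token("analytics-team")
if verify_token(token):
    print("session authorized: submit MapReduce job")  # hand off to cluster
```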

    From cluster databases to cloud storage: Providing transactional support on the cloud

    Over the past three decades, technology constraints (e.g., the capacity of storage devices or the bandwidth of communication networks) and an ever-increasing set of user demands (e.g., information structures, data volumes) have driven the evolution of distributed databases. Since the flat-file data repositories developed in the early eighties, there have been important advances in concurrency control algorithms, replication protocols, and transaction management. However, modern data storage concerns posed by Big Data and cloud computing, aimed at overcoming the scalability and elasticity limitations of classic databases, are pushing practitioners to relax some important properties featured by transactions, which excludes several applications that are unable to fit this strategy due to their intrinsically transactional nature. The purpose of this thesis is to address two important challenges still latent in distributed databases: (1) the scalability limitations of transactional databases and (2) providing transactional support on cloud-based storage repositories. Analyzing the traditional concurrency control and replication techniques used by classic databases to support transactions is critical to identify the reasons that make these systems degrade their throughput when the number of nodes and/or the amount of data grows rapidly. This analysis also serves to justify the design rationale behind cloud repositories, in which transactions have been generally neglected. Furthermore, enabling applications that are strongly dependent on transactions to take advantage of the cloud storage paradigm is crucial for their adaptation to current data demands and business models.
This dissertation starts by proposing a custom protocol simulator for static distributed databases, which serves as a basis for revising and comparing the performance of existing concurrency control protocols and replication techniques. As this thesis is especially concerned with transactions, the effects of different transaction profiles under different conditions on database scalability are studied. This analysis is followed by a review of existing cloud storage repositories, which claim to be highly dynamic, scalable, and available, leading to an evaluation of the parameters and features that these systems have sacrificed in order to meet current large-scale data storage demands. To further explore the possibilities of the cloud computing paradigm in a real-world scenario, a cloud-inspired approach to storing data from Smart Grids is presented. More specifically, the proposed architecture combines classic database replication techniques and epidemic update propagation with the design principles of cloud-based storage. The key insights collected when prototyping the replication and concurrency control protocols in the database simulator, together with the experience derived from building a large-scale storage repository for Smart Grids, are wrapped up into what we have coined Epidemia: a storage infrastructure conceived to provide transactional support on the cloud. In addition to inheriting the benefits of highly scalable cloud repositories, Epidemia includes a transaction management layer that forwards client transactions to a hierarchical set of data partitions, which allows the system to offer different consistency levels and to elastically adapt its configuration to incoming workloads. Finally, experimental results highlight the feasibility of our contribution and encourage practitioners to conduct further research in this area.
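The abstract describes Epidemia's transaction management layer only at a high level. The sketch below illustrates the routing idea it names, forwarding each client transaction down a hierarchy of data partitions with a per-transaction consistency level; the class and method names are assumptions, as the abstract publishes no API.

```python
# Illustrative sketch of a transaction-routing layer in the spirit of
# Epidemia: transactions descend a partition hierarchy and declare the
# consistency level they need. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Partition:
    name: str
    key_range: range                               # keys served here
    children: list = field(default_factory=list)   # sub-partitions

    def route(self, key: int) -> "Partition":
        """Descend the hierarchy to the most specific partition for `key`."""
        for child in self.children:
            if key in child.key_range:
                return child.route(key)
        return self

@dataclass
class Transaction:
    key: int
    op: str
    consistency: str = "eventual"   # or "strong"

root = Partition("root", range(0, 1000), children=[
    Partition("p0", range(0, 500)),
    Partition("p1", range(500, 1000)),
])

def submit(txn: Transaction):
    target = root.route(txn.key)
    # Strong transactions would be coordinated synchronously across replicas;
    # eventual ones can be applied locally and propagated epidemically.
    mode = "sync-replicate" if txn.consistency == "strong" else "epidemic"
    print(f"{txn.op}(key={txn.key}) -> {target.name} [{mode}]")

submit(Transaction(key=42, op="write", consistency="strong"))
submit(Transaction(key=734, op="read"))
```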

    Service-oriented architecture for device lifecycle support in industrial automation

    Dissertation for the degree of Doctor in Electrical and Computer Engineering, specialty: Robotics and Integrated Manufacturing. This thesis addresses device lifecycle support within the service-oriented industrial automation domain. This domain is known for its plethora of heterogeneous equipment encompassing distinct functions, form factors, network interfaces, and I/O specifications, supported by dissimilar software and hardware platforms. There is thus an evident and growing need to take every device into account and to improve agility during the setup, control, management, monitoring, and diagnosis phases. The Service-Oriented Architecture (SOA) paradigm is currently a widely endorsed approach for both business and enterprise systems integration. SOA concepts and technology are continuously spreading across the layers of the enterprise organization, envisioning a unified interoperability solution. SOA promotes discoverability, loose coupling, abstraction, autonomy, and composition of services relying on open web standards, features that can make an important contribution to the industrial automation domain. The present work examined device-level requirements, constraints, and needs in industrial automation to determine how and where SOA can be employed to solve some of the existing difficulties. Supported by these outcomes, a reference architecture shaped by distributed, adaptive, and composable modules is proposed. This architecture will assist and ease the role of systems integrators during reengineering-related interventions throughout the system lifecycle. In a converging direction, the present work also proposes a service-oriented device model to support the previous architectural vision and goals by embedding added value in terms of service-oriented peer-to-peer discovery and identification, configuration, management, and agile customization of device resources. In this context, the implementation and validation work proved not only the feasibility and fitness of the proposed solution on two distinct test benches but also its relevance to the expanding domain of SOA applications supporting the device lifecycle in industrial automation.
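    To make the service-oriented device model concrete, the sketch below shows one plausible shape for it: every device exposes uniform identification and configuration services and announces itself for discovery. This interface is an illustrative assumption; the thesis builds on the open web-service standards mentioned above, not on this API.

```python
# Hypothetical sketch of a service-oriented device model: devices expose
# identification and configuration services and register for discovery.
class DeviceService:
    def __init__(self, device_id: str, capabilities: dict):
        self.device_id = device_id
        self.capabilities = capabilities   # e.g., functions, I/O specs
        self.config = {}

    def identify(self) -> dict:
        """Identification service: describe the device to integrators."""
        return {"id": self.device_id, "capabilities": self.capabilities}

    def configure(self, **settings):
        """Configuration service: agile customization of device resources."""
        self.config.update(settings)

class Registry:
    """Stand-in for peer-to-peer discovery: devices announce themselves."""
    def __init__(self):
        self.devices = {}

    def announce(self, dev: DeviceService):
        self.devices[dev.device_id] = dev

    def discover(self, needed_capability: str):
        return [d for d in self.devices.values()
                if needed_capability in d.capabilities]

registry = Registry()
drive = DeviceService("conveyor-drive-01", {"motion": "variable-speed"})
registry.announce(drive)
for dev in registry.discover("motion"):
    dev.configure(speed_rpm=1200)   # setup phase of the device lifecycle
print(drive.identify(), drive.config)
```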

    DevOps for Trustworthy Smart IoT Systems

    ENACT is a research project funded by the European Commission under its H2020 program. The project consortium consists of twelve industry and research member organisations spread across the whole EU. The overall goal of the ENACT project was to provide a novel set of solutions to enable DevOps in the realm of trustworthy Smart IoT Systems. Smart IoT Systems (SIS) are complex systems involving not only sensors but also actuators, with control loops distributed all across the IoT, Edge, and Cloud infrastructure. Since smart IoT systems typically operate in a changing and often unpredictable environment, the ability of these systems to continuously evolve and adapt to their new environment is decisive to ensure and increase their trustworthiness, quality, and user experience. DevOps has established itself as a software development life-cycle model that encourages developers to continuously bring new features to the system under operation without sacrificing quality. This book reports on the ENACT work to empower the development and operation as well as the continuous and agile evolution of SIS, which is necessary to adapt the system to changes in its environment, such as newly appearing trustworthiness threats.

    How To Touch a Running System

    The increasing importance of distributed and decentralized software architectures entails more and more attention for adaptive software. Obtaining adaptiveness, however, is a difficult task, as the software design needs to foresee and cope with a variety of situations. Using reconfiguration of components facilitates this task, as the adaptivity is handled at the architecture level instead of directly in the code. This results in a separation of concerns: the appropriate reconfiguration can be devised at a coarse level, while the implementation of the components can remain largely unaware of reconfiguration scenarios. We study reconfiguration in component frameworks based on formal theory. We first discuss programming with components, exemplified by the development of the cmc model checker. This highly efficient model checker is made of C++ components and serves as an example of component-based software development practice in general; it also provides insights into the principles of adaptivity. However, its component model focuses on high performance and is not geared towards using the structuring principle of components for controlled reconfiguration. We thus complement this highly optimized model with a message-passing-based component model that takes reconfigurability as its central principle. Supporting reconfiguration in a framework is about relieving the programmer of its peculiarities as much as possible. We utilize the formal description of the component model to provide a reconfiguration algorithm that retains as much flexibility as possible while avoiding most problems that arise due to concurrency. This algorithm is embedded in a general four-stage adaptivity model inspired by physical control loops. The reconfiguration is devised to work with stateful components, retaining their data and unprocessed messages. Reconfiguration plans, which are given a formal semantics, form the input of the reconfiguration algorithm. We show that the algorithm achieves perceived atomicity of the reconfiguration process for an important class of plans, i.e., the whole process of reconfiguration is perceived as one atomic step, while minimizing the blocking of components. We illustrate the applicability of our approach to reconfiguration with several examples, such as fault tolerance and automated resource control.
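    The core mechanism described here, swapping a stateful component at runtime while keeping its data and unprocessed messages and briefly blocking it so the swap appears atomic, can be illustrated with a minimal sketch. The names and the blocking strategy below are assumptions for illustration; the thesis works with formally specified reconfiguration plans, not this API.

```python
# Minimal sketch of runtime reconfiguration of a stateful component:
# state and the unprocessed-message queue survive the swap, and a short
# blocking phase gives the rest of the system perceived atomicity.
from collections import deque

class Component:
    def __init__(self, state=None):
        self.state = state or {}
        self.inbox = deque()            # unprocessed messages survive swaps
        self.blocked = False

    def send(self, msg):
        self.inbox.append(msg)

    def step(self):
        if self.blocked or not self.inbox:
            return
        msg = self.inbox.popleft()
        self.state["count"] = self.state.get("count", 0) + 1
        print(f"v1 handled {msg!r}")

class ComponentV2(Component):
    def step(self):  # new behavior after reconfiguration
        if self.blocked or not self.inbox:
            return
        msg = self.inbox.popleft()
        self.state["count"] = self.state.get("count", 0) + 1
        print(f"v2 handled {msg!r} (count={self.state['count']})")

def reconfigure(old: Component) -> Component:
    """Swap implementations; a short blocking phase quiesces the component."""
    old.blocked = True                  # no message processing during swap
    new = ComponentV2(state=old.state)  # transfer state...
    new.inbox = old.inbox               # ...and unprocessed messages
    new.blocked = False
    return new

c = Component()
c.send("a")
c.step()                     # handled by the old implementation
c.send("b")                  # still queued when reconfiguration starts
c = reconfigure(c)
c.step()                     # "b" is handled by the new implementation
```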

    Composable simulation models and their formal validation

    Ph.D. (Doctor of Philosophy)

    Survey of Template-Based Code Generation

    A critical step in model-driven engineering (MDE) is the automatic synthesis of textual artifacts from models. This is a very useful model transformation for generating source code, serializing models into persistent storage, and generating documentation or reports. Among the various model-to-text transformation paradigms, template-based code generation (TBCG) is the most popular in MDE. TBCG is a synthesis technique that produces code from high-level specifications called templates; it is a popular technique in MDE given that both emphasize abstraction and automation. Given the diversity of tools and approaches, it is necessary to classify and compare existing TBCG techniques in order to provide appropriate support to developers. The goal of this thesis is to better understand the characteristics of TBCG techniques, identify research trends, and assess the importance of the role of MDE in this code synthesis approach. We also evaluate the expressiveness, performance, and scalability of the associated tools based on a range of models that implement critical patterns. To this end, we conduct a systematic mapping study of the literature that paints an interesting overview of TBCG, and a comparative study of TBCG tools to better guide developers in their choices. This study shows that model-based tools offer more expressiveness, whereas code-based tools perform much faster. Xtend2 offers the best compromise between expressiveness and performance.
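    To make the TBCG concept concrete, here is a small, generic illustration using only the standard library: a high-level "model" (a plain dict, a hypothetical stand-in for an MDE model) is fed through a template that emits source code. Real TBCG tools such as Xtend2 or Acceleo use far richer template languages; this sketch only shows the core idea of the technique.

```python
# Toy template-based code generation: a model drives a text template
# that emits a ready-to-use class definition.
from string import Template

# A toy "model": one class with typed attributes (hypothetical content).
model = {"class_name": "Sensor", "fields": [("id", "int"), ("unit", "str")]}

class_template = Template(
    "class $name:\n"
    "    def __init__(self, $params):\n"
    "$assignments"
)

params = ", ".join(f"{f}: {t}" for f, t in model["fields"])
assignments = "".join(f"        self.{f} = {f}\n" for f, _ in model["fields"])

generated = class_template.substitute(
    name=model["class_name"], params=params, assignments=assignments
)
print(generated)   # emits the generated Python class definition
```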

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to afford a better discernment of the domain at hand, their representation becomes increasingly demanding in computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest to these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.