525 research outputs found

    A modeling language for multi-tenant data architecture evolution in cloud applications

    Multi-tenancy enables efficient resource utilization by sharing application resources across multiple customers (i.e., tenants). Hence, applications built using this pattern can be offered at a lower price and require less maintenance effort, as fewer application instances and supporting cloud resources must be maintained. These properties encourage cloud application providers to adopt multi-tenancy in their existing applications, yet introducing this pattern requires significant changes in the application structure to address multi-tenancy requirements such as isolation of tenants, extensibility of the application, and scalability of the solution. In cloud applications, the data layer is often the prime candidate for multi-tenancy, and it usually comprises a combination of different cloud storage solutions such as blob storage and relational and non-relational databases. These storage types are conceptually and tangibly divergent, each requiring its own partitioning schemes to meet multi-tenancy requirements. Currently, multi-tenant data architectures are implemented using manual coding methods, at times following guidance and patterns offered by cloud providers. However, such manual implementation approaches tend to be time consuming and error prone. Several modeling methods based on Model-Driven Engineering (MDE) and Software Product Line Engineering (SPLE) have been proposed to capture multi-tenancy in cloud applications. These methods mainly generate cloud deployment configurations from an application model, but they do not automate the implementation or evolution of applications. This thesis aims to facilitate the development of multi-tenant cloud data architectures using model-driven engineering techniques. This is achieved by designing and implementing a novel modeling language, CadaML, that provides concepts and notations to model multi-tenant cloud data architectures in an abstract way. 
    CadaML also provides a set of tools to validate the data architecture and automatically produce the corresponding data access layer code. The thesis demonstrates the feasibility of the modeling language in a practical setting, and the adequacy of the multi-tenancy implementation in the generated code, on an industrial business process analysis application. Moreover, the modeling language is empirically compared against manual implementation methods to inspect its effect on developer productivity, development effort, reliability of the application code, and usability of the language. These outcomes provide a strong argument that the CadaML modeling language effectively mitigates the high overhead of manually implementing multi-tenant cloud data layers, significantly reducing the required development complexity and time.
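    The kind of tenant isolation a generated data access layer must enforce can be illustrated with a minimal sketch. This is not CadaML output; it is a hypothetical shared-table store where every row carries a tenant identifier and all queries are scoped to the calling tenant (row-level partitioning, one of several possible schemes):

```python
from dataclasses import dataclass, field

@dataclass
class SharedTableStore:
    """Hypothetical shared-table multi-tenant store: every row carries a
    tenant_id, and every query is scoped to one tenant (row-level isolation)."""
    rows: list = field(default_factory=list)

    def insert(self, tenant_id, record):
        # Tag each record with its owning tenant on the way in.
        self.rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id):
        # Tenant isolation: a tenant can only ever see its own rows.
        return [r for r in self.rows if r["tenant_id"] == tenant_id]

store = SharedTableStore()
store.insert("acme", {"order": 1})
store.insert("globex", {"order": 2})
print(store.query("acme"))  # only acme's rows are visible to acme
```

    Other partitioning schemes (separate schemas or separate databases per tenant) trade this simplicity for stronger isolation; a generator can target any of them from the same abstract model.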

    Languages of games and play: A systematic mapping study

    Digital games are a powerful means for creating enticing, beautiful, educational, and often highly addictive interactive experiences that impact the lives of billions of players worldwide. We explore what informs the design and construction of good games in order to learn how to speed up game development. In particular, we study to what extent languages, notations, patterns, and tools can offer experts the theoretical foundations, systematic techniques, and practical solutions they need to raise their productivity and improve the quality of games and play. Despite the growing number of publications on this topic, there is currently no overview describing the state of the art that relates research areas, goals, and applications. As a result, efforts and successes are often one-off, lessons learned go overlooked, language reuse remains minimal, and opportunities for collaboration and synergy are lost. We present a systematic map that identifies relevant publications and gives an overview of research areas and publication venues. In addition, we categorize research perspectives along common objectives, techniques, and approaches, illustrated by summaries of selected languages. Finally, we distill challenges and opportunities for future research and development.

    Logram: Efficient Log Parsing Using n-Gram Dictionaries

    Software systems usually record important runtime information in their logs. Logs help practitioners understand system runtime behaviors and diagnose field failures. As logs are usually very large in size, automated log analysis is needed to assist practitioners in their software operation and maintenance efforts. Typically, the first step of automated log analysis is log parsing, i.e., converting unstructured raw logs into structured data. However, log parsing is challenging, because logs are produced by static templates in the source code (i.e., logging statements), yet the templates are usually inaccessible when parsing logs. Prior work proposed automated log parsing approaches that have achieved high accuracy. However, as the volume of logs grows rapidly in the era of cloud computing, efficiency becomes a major concern in log parsing. In this work, we propose an automated log parsing approach, Logram, which leverages n-gram dictionaries to achieve efficient log parsing. We evaluated Logram on 16 public log datasets and compared Logram with five state-of-the-art log parsing approaches. We found that Logram achieves a parsing accuracy similar to the best existing approaches while outperforming these approaches in efficiency (i.e., 1.8 to 5.1 times faster than the second-fastest approach). Furthermore, we deployed Logram on Spark and found that Logram scales out efficiently with the number of Spark nodes (e.g., with near-linear scalability) without sacrificing parsing accuracy. In addition, we demonstrated that Logram can support effective online parsing of logs, achieving parsing results and efficiency similar to the offline mode.
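    The core intuition behind dictionary-based parsing can be sketched in a few lines. This is a heavily simplified toy in the spirit of Logram, not the published algorithm: tokens that occur in frequent n-grams (here, unigrams and bigrams, with an illustrative threshold) are kept as static template text, while rare tokens are abstracted as parameters:

```python
from collections import Counter

def parse_logs(lines, threshold=2):
    """Toy n-gram-dictionary log parser: mark a token as static if it
    appears in a frequent unigram or bigram, otherwise abstract it as <*>."""
    unigrams, bigrams = Counter(), Counter()
    tokenized = [line.split() for line in lines]
    for tokens in tokenized:                 # pass 1: build the dictionaries
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))

    templates = []
    for tokens in tokenized:                 # pass 2: derive templates
        out = []
        for i, tok in enumerate(tokens):
            neighbours = [tuple(tokens[i - 1:i + 1]), tuple(tokens[i:i + 2])]
            frequent = unigrams[tok] >= threshold or any(
                bigrams[bg] >= threshold for bg in neighbours if len(bg) == 2
            )
            out.append(tok if frequent else "<*>")
        templates.append(" ".join(out))
    return templates

logs = [
    "Connection from 10.0.0.1 closed",
    "Connection from 10.0.0.2 closed",
    "Connection from 10.0.0.3 closed",
]
print(parse_logs(logs))  # the IP varies, so it becomes the parameter <*>
```

    Because both passes only build and probe hash-based counters, the cost stays linear in the number of tokens, which hints at why a dictionary-based design parses faster than approaches that compare each log line against candidate templates.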

    Software development in the post-PC era : towards software development as a service

    PhD thesis. Engineering software systems is a complex task which involves various stakeholders and requires planning and management to succeed. As the role of software in our daily life increases, the complexity of software systems increases. Throughout the short history of software engineering as a discipline, development practices and methods have rapidly evolved to seize opportunities enabled by new technologies (e.g., the Internet) and to overcome economic challenges (e.g., the need for cheaper and faster development). Today, we are witnessing the post-PC era: an era characterised by mobility and services, an era that removes organisational and geographical boundaries, an era that changes the functionality of software systems and requires alternative methods for conceiving them. In this thesis, we envision executing software development processes in the cloud. Software processes have a software production aspect and a management aspect. To the best of our knowledge, there are no academic or industrial solutions supporting the entire software development process life-cycle (from both production and management aspects) and its tool-chain execution in the cloud. Our vision is to use the cloud's economies of scale and leverage Model-Driven Engineering (MDE) to integrate production and management aspects into the development process. Since software processes are seen as workflows, we investigate using existing Workflow Management Systems to execute software processes, and we find that these systems are not suitable. Therefore, we propose a reference architecture for Software Development as a Service (SDaaS). The SDaaS reference architecture is the first proposal that fully supports the development of complex software systems in the cloud. In addition to the reference architecture, we investigate three specific related challenges and propose novel solutions addressing them. 
    These challenges are: Modelling and enacting cloud-based executable software processes. Executing software processes in the cloud can bring several benefits to software development. In this thesis, we discuss the benefits and considerations of cloud-based software processes and introduce a modelling language for modelling such processes, which we refer to as EXE-SPEM. It extends the Software and Systems Process Engineering (SPEM2.0) OMG standard to support creating cloud-based executable software process models. Since EXE-SPEM is a visual modelling language, we introduce an XML notation to represent EXE-SPEM models in a machine-readable format and provide mapping rules from EXE-SPEM to this notation. We demonstrate this approach by modelling an example software process using EXE-SPEM and mapping it to the XML notation. Software process models expressed in this XML format can then be enacted in the proposed SDaaS architecture. Cost-efficient scheduling of software process execution in the cloud. Software process models are enacted in the SDaaS architecture as workflows, which we sometimes refer to as software workflows. Once we have executable software process models, we need to schedule them for execution. In a setting where multiple software workflows (and their activities) compete for shared computational resources (workflow engines), scheduling workflow execution becomes important. Workflow scheduling is an NP-hard problem which refers to the allocation of sufficient resources (human or computational) to workflow activities. The schedule impacts the workflow makespan (execution time) and cost, as well as the utilisation of computational resources. The target of the scheduling is to reduce the process execution cost in the cloud without significantly affecting the process makespan, while satisfying the special requirements of each process activity (e.g., executing on a private cloud). 
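    The scheduling objective just described (minimise cost under placement constraints, while watching the makespan) could be sketched as follows. This is a deliberately naive greedy baseline under invented VM prices and task data, not the thesis's Proportional Adaptive Task Schedule, whose details are not given here:

```python
def schedule(tasks, vms):
    """Greedy cost-first scheduler sketch: each task goes to the cheapest VM
    that satisfies its placement constraint (e.g., private cloud only).
    Returns (total cost, makespan); tasks queue sequentially on their VM."""
    finish = {vm["name"]: 0.0 for vm in vms}   # per-VM busy time
    total_cost = 0.0
    for task in tasks:
        candidates = [vm for vm in vms
                      if task.get("cloud") in (None, vm["cloud"])]
        vm = min(candidates, key=lambda v: v["cost_per_h"])  # cheapest first
        finish[vm["name"]] += task["hours"]
        total_cost += task["hours"] * vm["cost_per_h"]
    return total_cost, max(finish.values())

# Illustrative inputs: a cheap public VM and a pricier private one.
vms = [{"name": "pub1", "cloud": "public", "cost_per_h": 1.0},
       {"name": "priv1", "cloud": "private", "cost_per_h": 3.0}]
tasks = [{"hours": 2.0},                       # unconstrained task
         {"hours": 1.0, "cloud": "private"}]   # must run on the private cloud
cost, makespan = schedule(tasks, vms)
```

    A real scheduler must additionally balance cost against makespan (a cost-first greedy can starve fast VMs and stretch execution time), which is exactly the trade-off the thesis's simulation study evaluates.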
    We adapt three workflow scheduling algorithms to fit SDaaS and propose a fourth one, the Proportional Adaptive Task Schedule. The algorithms are then evaluated through simulation. The simulation results show that our proposed algorithm saves between 19.74% and 45.78% of the execution cost, provides the best resource (VM) utilisation, and provides the second-best makespan compared to the other presented algorithms. Evaluating the SDaaS architecture using a case study from the safety-critical systems domain. To evaluate the proposed SDaaS reference architecture, we instantiate a proof-of-concept implementation of the architecture. This implementation is then used to enact safety-critical processes as a case study. Engineering safety-critical systems is a complex task which involves multiple stakeholders, and it requires shared and scalable computation to systematically involve geographically distributed teams. In this case study, we use EXE-SPEM to model a portion of a process (namely, the Preliminary System Safety Assessment - PSSA) adapted from the ARP4761 [2] aerospace standard. Then, we enact this process model in the proof-of-concept SDaaS implementation. By using the SDaaS architecture, we demonstrate the feasibility of our approach and its applicability to different domains and to customised processes. We also demonstrate the capability of EXE-SPEM to model cloud-based executable processes. Furthermore, we demonstrate the added value of the process models and of the process execution provenance data recorded by the SDaaS architecture; this data is used to automate the generation of safety case argument fragments, thus reducing development cost and time. Finally, the case study shows that we can integrate some existing tools, and create new ones, as activities used in process models. The proposed SDaaS reference architecture (combined with its modelling, scheduling and enactment capabilities) brings the benefits of the cloud to software development. 
    It can potentially save software production cost and provide an accessible platform that supports collaborating teams (potentially across different locations). The executable process models support unified interpretation and execution of processes across team members. In addition, the use of models provides managers with global awareness and can be utilised for quality assurance and for process metrics analysis and improvement. We see the contributions provided in this thesis as a first step towards an alternative development method that uses the benefits of the cloud and Model-Driven Engineering to overcome existing challenges and open new opportunities. However, there are several challenges outside the scope of this study which need to be addressed to allow full support of the SDaaS vision (e.g., supporting interactive workflows). The solutions provided in this thesis address only part of a bigger vision. There is also a need for empirical and usability studies to examine the impact of the SDaaS architecture on both the produced products (in terms of quality, cost, time, etc.) and the participating stakeholders.

    Extending cloud-based applications in challenged environments with mobile opportunistic networks

    With the tremendous growth of mobile devices (e.g., smartphones, tablets and PDAs) in recent years, users are looking for more advanced platforms in order to use their computational applications (e.g., processing and storage) in a faster and more convenient way. In addition, mobile devices are capable of using cloud-based applications, and the use of such technology is growing in popularity. However, one major concern is how to efficiently access these cloud-based applications when using a resource-constrained mobile device. Essentially, such applications require a continuous Internet connection, which is difficult to obtain in challenged environments that lack a communication infrastructure (e.g., sparse or rural areas), in areas with infrastructure (e.g., urban or high-density areas) whose access networks are restricted or full of interference, and even in areas with high costs of Internet roaming. In these situations, mobile opportunistic networks may be used to extend the reach of cloud-based applications to the user. In this thesis we explore extending cloud-based applications with mobile opportunistic networks in challenged environments and observe how local users' social interactions and collaborations help to improve the overall message delivery performance in the network. With real-world trace-driven simulations, we compare and contrast different users' behaviours in message forwarding, the impact of various network loads (e.g., number of messages) along with long-sized messages, and the impact of different wireless networking technologies, across various opportunistic routing protocols in a challenged environment.
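    The store-carry-forward idea underlying opportunistic routing can be sketched with a toy epidemic-style simulation over a contact trace. The trace and the flooding strategy below are illustrative assumptions, not the protocols or datasets evaluated in the thesis:

```python
def epidemic_delivery(contacts, source, dest):
    """Toy store-carry-forward simulation: a message is replicated whenever
    a carrier meets another node (epidemic flooding); delivery happens when
    any carrier meets the destination. 'contacts' is a time-ordered list of
    node pairs that come into radio range of each other."""
    carriers = {source}
    for t, (a, b) in enumerate(contacts):
        if a in carriers or b in carriers:
            carriers.update((a, b))   # both parties now carry the message
            if dest in carriers:
                return t              # index of the delivering contact
    return None                       # message never reached the destination

contacts = [("A", "B"), ("B", "C"), ("C", "D")]   # hypothetical trace
print(epidemic_delivery(contacts, "A", "D"))      # delivered via A→B→C→D
```

    Real protocols differ mainly in *which* contacts they replicate over (e.g., using social ties or delivery predictability instead of flooding), which is what trace-driven comparisons of routing protocols measure.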

    EARLY ASSESSMENT OF SERVICE PERFORMANCE USING SIMULATION

    The success of web services is changing the way in which software is designed, developed, and distributed. The increasing diffusion of software in the form of services, available as commodities over the Internet, has enabled business scenarios where processes are implemented by composing loosely coupled services chosen at runtime. Services are in fact continuously re-designed and incrementally developed, released in heterogeneous and distributed environments, and selected and integrated at runtime within external business processes. In this dynamic context, there is a need for solutions supporting the evaluation of service performance at an early stage of the software development process, or even at design time, to support users in an a priori evaluation of the impact a given service might have when integrated into their business process. A number of useful performance verification and validation techniques have been proposed to test and simulate web services, but they assume the availability of service code, or at least of reliable information (e.g., collected by testing) on service behavior. Among these approaches, simulation-based techniques are mostly used to assess the behavior of a service and predict its behavior using historical data. Despite the benefits of such solutions, few proposals have addressed the problem of how service performance can be assessed at design time and how historical data can be replaced by simulation data for performance evaluation at an early stage of the development cycle. In this thesis, the notion of simulation is fully integrated within the early phases of the software development process in order to predict the behavior of services. We propose model-based approaches that rely on the amount of information available for simulating the performance of service operations. We distinguish full-knowledge, partial-knowledge and zero-knowledge scenarios. 
    In a full-knowledge scenario, the total execution times of each operation and the internal distributions of delays are known and used for performance evaluation. In a partial-knowledge scenario, partial testing results (i.e., the lower and upper bounds on the operation execution times) are used to simulate a service's performance. In the zero-knowledge scenario, no testing results are considered; only simulation results are used for performance evaluation. The main contributions of this thesis can be summarized as follows. Firstly, we proposed a model-based approach that relies on Symbolic Transition Systems (STS) to describe web services as finite state automata and evaluate their performance. This model was extended for testing and simulation. The testing model annotates model transitions with performance idioms, which allow the behavior of the service to be evaluated. The simulation model extends the standard STS-based model with transition probabilities and delay distributions, and is used to generate a simulation script that simulates the service behavior. Our methodology uses simulation along the design and pre-deployment phases of the web service lifecycle to preliminarily assess web service performance, using coarse-grained information on the total execution time of each service operation derived by testing. We used testing results and provided some practical examples to validate our methodology and the quality of the performance measurements computed by simulation, considering the full-knowledge and partial-knowledge scenarios. The results obtained showed that our simulation gives an accurate estimation of the execution times. Secondly, the thesis proposed an approach that permits service developers and software adopters to evaluate service performance in a zero-knowledge scenario, where testing results and service code are not yet available. Our approach is built on expert knowledge to estimate the execution time of the service operation. 
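    A simulation model of the kind described (state machine plus transition probabilities and delay distributions) can be sketched as a small Monte Carlo run. The states, probabilities, and uniform delay bounds below are illustrative placeholders, not values from the thesis:

```python
import random

def simulate_operation(transitions, start="s0", end="done", runs=2000, seed=7):
    """Toy Monte Carlo over an STS-style model: each transition has a
    probability and a [lo, hi] uniform delay; we sample paths from 'start'
    to 'end' and return the mean end-to-end execution time."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        state, t = start, 0.0
        while state != end:
            outgoing = transitions[state]
            # Pick the next transition according to the model probabilities.
            nxt, lo, hi = rng.choices(
                [(n, l, h) for n, _, l, h in outgoing],
                weights=[p for _, p, _, _ in outgoing],
            )[0]
            t += rng.uniform(lo, hi)   # sample the transition's delay
            state = nxt
        total += t
    return total / runs

# (next_state, probability, delay_lo, delay_hi) -- illustrative model:
transitions = {
    "s0":   [("work", 1.0, 1.0, 2.0)],                          # setup step
    "work": [("done", 0.9, 0.5, 1.5), ("work", 0.1, 0.5, 1.5)], # 10% retry
}
mean = simulate_operation(transitions)
```

    The expected value here is roughly 1.5 (setup) plus 1/0.9 work attempts of mean 1.0 each, about 2.6; replacing the sampled delays with measured bounds is what distinguishes the partial-knowledge scenario from the zero-knowledge one.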
    This approach evaluates the complexity of the service operation using the input and output Simple Object Access Protocol (SOAP) messages and the Web Service Description Language (WSDL) interface of the service. Then, the interval of operation execution times is estimated based on profile tables providing the time overhead needed to parse and build SOAP messages, and on the performance inferred from the testing of some reference service operations. Our simulation results showed that our zero-knowledge approach gives an accurate approximation of the interval of execution times when compared with the testing results at the end of development. Thirdly, the thesis proposed an application of our previous approaches to the definition of a framework that allows negotiating and monitoring the performance Service Level Agreement (SLA) of a web service based on simulation data. The solution for SLA monitoring is based on the STS-based model for testing, and the solution for SLA negotiation is based on the service model for simulation. This work provides an idea of the SLA of the service in advance, and of how to handle violations of the performance SLA after service deployment.
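    The zero-knowledge interval estimation could be sketched as follows. The per-element parse/build overheads and the reference-operation times in the profile table are invented placeholders; only the overall shape (message complexity × profiled overhead + reference bounds) follows the description above:

```python
def estimate_interval(n_input_elems, n_output_elems, profile):
    """Sketch of zero-knowledge estimation: derive a [min, max] execution-time
    interval for an operation from the size of its input/output SOAP messages
    and a profile table of per-element overheads plus reference-operation
    times. All numbers are illustrative, not measured values."""
    parse = n_input_elems * profile["parse_ms_per_elem"]    # request parsing
    build = n_output_elems * profile["build_ms_per_elem"]   # response building
    lo = parse + build + profile["ref_min_ms"]   # lightest reference operation
    hi = parse + build + profile["ref_max_ms"]   # heaviest reference operation
    return lo, hi

profile = {"parse_ms_per_elem": 0.2, "build_ms_per_elem": 0.3,
           "ref_min_ms": 5.0, "ref_max_ms": 20.0}
lo, hi = estimate_interval(10, 4, profile)  # 10-element request, 4-element reply
```

    An interval like this is also a natural basis for SLA negotiation: the provider can only safely commit to response-time bounds at or above the estimated upper end.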

    Digital transformation in the manufacturing industry : business models and smart service systems

    The digital transformation enables innovative business models and smart services, i.e. individual services that are based on real-time data analyses as well as information and communications technology. Smart services are not only a theoretical construct but are also highly relevant in practice. Nine research questions are answered, all related to aspects of smart services and corresponding business models. The dissertation proceeds from a general overview, over the topic of installed base management as a precondition for many smart services in the manufacturing industry, towards exemplary applications in the form of predictive maintenance activities. A comprehensive overview of smart service research is provided, and research gaps that are not yet closed are presented. It is shown how a business model can be developed in practice. A closer look is taken at installed base management. Installed base data combined with condition monitoring data leads to digital twins, i.e. dynamic models of machines including all components, their current conditions, applications and interaction with the environment. Design principles for an information architecture for installed base management, and its application within a use case in the manufacturing industry, indicate how digital twins can be structured. In this context, predictive maintenance services are used for the purpose of concretization. State-oriented maintenance planning and optimized spare parts inventory are examined as exemplary approaches for smart services that contribute to high machine availability. A taxonomy of predictive maintenance business models shows their diversity. The named topics are viewed from both theoretical and practical viewpoints, focusing on the manufacturing industry. Established research methods are used to ensure academic rigor. Practical problems are considered to guarantee practical relevance. 
    A research project as background, and the resulting collaboration with experts from several companies, also contribute to that. The dissertation provides a comprehensive overview of smart service topics and innovative business models for the manufacturing industry, enabled by the digital transformation. It contributes to a better understanding of smart services in theory and practice and emphasizes the importance of innovative business models in the manufacturing industry.
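    State-oriented maintenance planning of the kind mentioned above can be illustrated with a minimal sketch: fit a trend to condition-monitoring readings and flag maintenance if the extrapolated condition crosses a limit within the planning horizon. Readings, limit, and horizon are invented for illustration:

```python
def maintenance_due(condition_readings, wear_limit, horizon):
    """Toy state-oriented maintenance check: fit a least-squares linear trend
    to equally spaced condition readings (e.g., a wear indicator) and flag
    maintenance if the value projected 'horizon' steps ahead reaches the limit."""
    n = len(condition_readings)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(condition_readings) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(xs, condition_readings))
             / sum((x - mean_x) ** 2 for x in xs))
    projected = condition_readings[-1] + slope * horizon
    return projected >= wear_limit

readings = [0.10, 0.14, 0.19, 0.25]   # wear indicator, rising over time
print(maintenance_due(readings, wear_limit=0.45, horizon=5))
```

    Deciding *when* such a flag triggers a service visit, and who pays for it, is precisely where the predictive maintenance business models discussed above differ.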

    Sixth Biennial Report : August 2001 - May 2003
