Continuous Deployment Transitions at Scale
Predictable, rapid, and data-driven feature rollout and lightning-fast, automated fix deployment are among the benefits most large software organizations worldwide are striving for. In the process, they are transitioning toward the use of continuous deployment practices. Continuous deployment enables companies to make hundreds or thousands of software changes to live computing infrastructure every day while maintaining service to millions of customers. Such ultra-fast changes create a new reality in software development. Over the past four years, the Continuous Deployment Summit has been held, hosted in turn by Facebook, Netflix, Google, and Twitter. Representatives from companies such as Cisco, Facebook, Google, IBM, Microsoft, Netflix, and Twitter have shared the triumphs and struggles of their transitions to continuous deployment practices, and each year the companies press on, getting ever faster. In this chapter, the authors share the common strategies and practices used by continuous deployment pioneers and adopted by newcomers as they transition to and use continuous deployment practices at scale.
Zero-Downtime SQL Database Schema Evolution for Continuous Deployment
When a web service or application evolves, its database schema — tables, constraints, and indices — often needs to evolve along with it. Depending on the database, some of these changes require a full table lock, preventing the service from accessing the tables under change. To deal with this, web services are typically taken offline momentarily to modify the database schema. However, with the introduction of concepts like Continuous Deployment, web services are deployed into their production environments every time the source code is modified. Having to take the service offline — potentially several times a day — to perform schema changes is undesirable. In this paper we introduce QuantumDB — a tool-supported approach that abstracts this evolution process away from the web service without locking tables. This allows us to redeploy a web service without needing to take it offline even when a database schema change is necessary. In addition, QuantumDB puts no restrictions on the method of deployment, supports schema changes to multiple tables using changesets, and does not subvert foreign key constraints during the evolution process. We evaluate QuantumDB by applying 19 synthetic and 95 industrial evolution scenarios to our open source implementation of QuantumDB. These experiments demonstrate that QuantumDB realizes zero-downtime migrations at the cost of acceptable overhead, and is applicable in industrial continuous deployment contexts.
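The lock-free evolution the abstract describes is commonly built on a ghost-table (expand-contract) pattern: the new schema is materialized in a shadow table while the live table stays in service, then the two are swapped. The sketch below illustrates that pattern in miniature with SQLite; it is not QuantumDB's actual implementation, and the `users` table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, fullname TEXT)")
conn.execute("INSERT INTO users (fullname) VALUES ('Ada Lovelace')")

# Expand: create a "ghost" table with the target schema; the live
# `users` table stays fully readable and writable the whole time.
conn.execute("CREATE TABLE users_new (id INTEGER PRIMARY KEY, display_name TEXT)")

# Backfill: copy existing rows across (in production this is batched,
# and triggers or middleware keep both tables in sync in the meantime).
conn.execute("INSERT INTO users_new (id, display_name) SELECT id, fullname FROM users")

# Contract: once every deployed service version uses the new schema,
# swap the tables with cheap metadata-only renames and drop the old one.
conn.execute("ALTER TABLE users RENAME TO users_old")
conn.execute("ALTER TABLE users_new RENAME TO users")
conn.execute("DROP TABLE users_old")

rows = conn.execute("SELECT display_name FROM users ORDER BY id").fetchall()
print([r[0] for r in rows])  # ['Ada Lovelace']
```

The key property is that each step is non-blocking for the application: no statement takes a long-lived lock on the live table, and the final renames are near-instant, which is what allows redeployment without an offline window.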
An empirical study of architecting for continuous delivery and deployment
Recently, many software organizations have been adopting Continuous Delivery
and Continuous Deployment (CD) practices to develop and deliver quality
software more frequently and reliably. Whilst an increasing amount of the
literature covers different aspects of CD, little is known about the role of
software architecture in CD and how an application should be (re-) architected
to enable and support CD. We have conducted a mixed-methods empirical study
that collected data through in-depth, semi-structured interviews with 21
industrial practitioners from 19 organizations, and a survey of 91 professional
software practitioners. Based on a systematic and rigorous analysis of the
gathered qualitative and quantitative data, we present a conceptual framework
to support the process of (re-) architecting for CD. We provide evidence-based
insights about practicing CD within monolithic systems and characterize the
principle of "small and independent deployment units" as an alternative to the
monoliths. Our framework supplements the architecting process in a CD context
through introducing the quality attributes (e.g., resilience) that require more
attention and demonstrating the strategies (e.g., prioritizing operations
concerns) to design operations-friendly architectures. We discuss the key
insights (e.g., monoliths and CD are not intrinsically oxymoronic) gained from
our study and draw implications for research and practice.
Comment: To appear in Empirical Software Engineering