
    A mixed-method empirical study of Function-as-a-Service software development in industrial practice

    Function-as-a-Service (FaaS) describes cloud computing services that make infrastructure components transparent to application developers, thus falling in the larger group of “serverless” computing models. When using FaaS offerings, such as AWS Lambda, developers provide atomic and short-running code for their functions, and FaaS providers execute and horizontally scale them on demand. Currently, there is no systematic research on how developers use serverless, what types of applications lend themselves to this model, or what architectural styles and practices FaaS-based applications are based on. We present results from a mixed-method study, combining interviews with practitioners who develop applications and systems that use FaaS, a systematic analysis of grey literature, and a Web-based survey. We find that successfully adopting FaaS requires a different mental model, where systems are primarily constructed by composing pre-existing services, with FaaS often acting as the “glue” that brings these services together. Tooling availability and maturity, especially related to testing and deployment, remains a major difficulty. Further, we find that current FaaS systems lack systematic support for function reuse, and abstractions and programming models for building non-trivial FaaS applications are limited. We conclude with a discussion of implications for FaaS providers, software developers, and researchers.
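
    As a concrete illustration of the model this abstract describes, the sketch below shows the kind of atomic, short-running function a developer might hand to AWS Lambda, with the function acting as "glue" between pre-existing managed services. This is a minimal sketch, not an artifact of the study; the event fields and object keys are hypothetical stand-ins.

        # Minimal sketch of an atomic FaaS function (AWS Lambda, Python runtime).
        # The event shape and object keys are hypothetical; the point is that the
        # function is stateless and short-running, and merely glues pre-existing
        # managed services (here, S3 storage) together.
        import boto3

        s3 = boto3.client("s3")  # created outside the handler, reused across warm invocations

        def handler(event, context):
            # The FaaS provider invokes this entry point on demand and scales it horizontally.
            bucket = event["bucket"]   # hypothetical event fields
            key = event["key"]
            obj = s3.get_object(Bucket=bucket, Key=key)
            size = len(obj["Body"].read())
            # Write a derived artifact back, composing two managed-service calls.
            s3.put_object(Bucket=bucket, Key=f"{key}.size", Body=str(size).encode())
            return {"bucket": bucket, "key": key, "bytes": size}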

    The Effects of Cloud Computing and Internet of Things on the Next Generation Internet

    Cloud computing and the Internet of Things (IoT) are two separate yet crucial technologies that increasingly influence our lives. Both are anticipated to be widely adopted, making them essential elements of the Future Internet (FI). IoT improves our daily life by enabling connectivity and communication across several devices, while the flexible network access offered by cloud computing makes it possible to integrate dynamic data from several sources. Nonetheless, integrating IoT and cloud computing in the FI raises a number of difficulties. Our goal in this research paper is to present and analyze the fundamental ideas behind cloud computing and the Internet of Things.

    The Journey to Serverless Migration: An Empirical Analysis of Intentions, Strategies, and Challenges

    Serverless is an emerging cloud computing paradigm that lets developers focus solely on the application logic rather than on provisioning and managing the underlying infrastructure. The inherent characteristics of serverless computing, such as scalability, flexibility, and cost efficiency, have attracted many companies to migrate their legacy applications toward this paradigm. However, the stateless nature of serverless requires careful migration planning and consideration of its subsequent implications and potential challenges. To this end, this study investigates the intentions, strategies, and technical and organizational challenges of migrating to a serverless architecture. We investigated the migration processes of 11 systems across diverse domains by conducting 15 in-depth interviews with professionals from 11 organizations, and we also present a detailed discussion of each migration case. Our findings reveal that large enterprises primarily migrate to enhance scalability and operational efficiency, while smaller organizations intend to reduce cost. Furthermore, organizations use a domain-driven design approach to identify use cases and gradually migrate to serverless using a strangler pattern. However, migration encounters technical challenges, i.e., testing event-driven architectures, integrating with legacy systems, and lack of standardization, as well as organizational challenges, i.e., mindset change and hiring skilled serverless developers, as the most prominent ones. The findings of this study provide a comprehensive understanding that can guide future implementations and advancements in the context of serverless migration.
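
    Since the abstract names the strangler pattern as the dominant migration strategy, a minimal sketch of the idea follows; the route names, hosts, and MIGRATED set are hypothetical, and real deployments would typically implement this at an API gateway or load balancer rather than in application code.

        # Hypothetical sketch of a strangler-pattern facade: requests whose routes
        # have already been migrated are sent to new serverless endpoints, while
        # everything else still reaches the legacy monolith. The routing table
        # grows entry by entry until the legacy system can be retired.
        MIGRATED = {"/orders", "/invoices"}                  # routes already moved (hypothetical)

        LEGACY_BASE = "https://legacy.example.internal"      # hypothetical hosts
        SERVERLESS_BASE = "https://api.example.com/v2"

        def route(path: str) -> str:
            """Return the backend URL that should serve this path."""
            base = SERVERLESS_BASE if path in MIGRATED else LEGACY_BASE
            return base + path

        if __name__ == "__main__":
            for p in ("/orders", "/customers"):
                # /orders hits the serverless backend, /customers the legacy system
                print(p, "->", route(p))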

    Benefitting from the Grey Literature in Software Engineering Research

    Researchers generally place the most trust in peer-reviewed, published information, such as journal and conference papers. By contrast, software engineering (SE) practitioners typically do not have the time, access, or expertise to review and benefit from such publications. As a result, practitioners are more likely to turn to other sources of information that they trust, e.g., trade magazines, online blog posts, survey results, or technical reports, collectively referred to as Grey Literature (GL). Furthermore, practitioners also share their ideas and experiences as GL, which can serve as a valuable data source for research. While GL itself is not a new topic in SE, using, benefitting from, and synthesizing knowledge from GL is a contemporary topic in empirical SE research, and researchers are increasingly benefitting from the knowledge available within GL. The goal of this chapter is to provide an overview of GL in SE, together with insights on how SE researchers can effectively use and benefit from the knowledge and evidence available in the vast amount of GL.

    Military Breaking Boundaries Implementing Third-Party Cloud Computing Practices for Data Storage

    Senior Information Technology (IT) military leadership cannot currently implement, maintain, and administer cloud data storage without the direct support of third-party vendors. This study directly concerns cloud practitioners, engineers, and architects who require a sophisticated and streamlined ability to safeguard invaluable data using third-party data storage. Grounded in the theory of planned behavior, the purpose of this qualitative single case study was to investigate strategies military leadership uses to implement third-party cloud computing for data storage. The participants (n = 22) consisted of cloud administrators, engineers, and architects within a sizeable midwestern city, each with a minimum of 3 years of cloud computing knowledge and 5 years of total IT experience. Data collection included semistructured interviews conducted via Skype, face-to-face, and by telephone, as well as internal and external organizational documents (n = 17). Four themes were identified through thematic analysis: work relationships between AWS vendors and military technicians, the strength of newly created security practices, consideration of all training and learning curves, and continuous safety and improvement. It is recommended that AWS and military technicians continue to work together, promoting safety and security. The implications for positive social change include the potential for job creation and enhancing the community economically.

    Adaptive monitoring and control framework in Application Service Management environment

    The economics of data centres and cloud computing services have pushed hardware and software requirements to the limits, leaving only a very small performance overhead before systems reach saturation. For Application Service Management (ASM), this carries a growing risk of impacting the execution times of various processes. In order to deliver a stable service at times of great demand for computational power, enterprise data centres and cloud providers must implement fast and robust control mechanisms that are capable of adapting to changing operating conditions while satisfying service-level agreements. In ASM practice, there are normally two methods for dealing with increased load: increasing computational power or shedding load. The first approach typically involves allocating additional machines, which must be available, waiting idle, to deal with high-demand situations. The second approach is implemented by terminating incoming actions that are less important under new activity demand patterns, throttling, or rescheduling jobs. Although most modern cloud platforms and operating systems do not allow adaptive or automatic termination of processes, tasks, or actions, it is common practice for administrators to manually end or stop tasks or actions at any level of the system, such as at the level of a node, function, or process, or to kill a long session executing on a database server. In this context, adaptive control of action termination remains a significantly underutilised subject of Application Service Management and deserves further consideration. For example, this approach may be eminently suitable for systems with harsh execution-time Service Level Agreements, such as real-time systems, systems running under hard pressure on power supplies, systems running under variable priority, or systems subject to constraints set up by the green computing paradigm. Along this line of work, the thesis investigates the potential of dimension relevance and metric-signal decomposition as methods that would enable more efficient action termination. These methods are integrated into adaptive control emulators and actuators powered by neural networks, which are used to adjust the operation of the system to better conditions in environments with established goals, seen from both system performance and economics perspectives. The behaviour of the proposed control framework is evaluated using complex load and service agreement scenarios for systems compatible with the requirements of on-premises and elastic compute cloud deployments, serverless computing, and microservices architectures.
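
    As a concrete illustration of the adaptive action-termination idea this thesis investigates, a minimal sketch of an SLA-driven controller follows. The moving-average predictor and the threshold policy are assumptions for illustration only, standing in for the thesis's neural-network-powered emulators and actuators.

        # Minimal sketch of SLA-driven action termination. A simple moving-average
        # predictor stands in for the thesis's neural-network components, purely
        # for illustration; the threshold policy is likewise an assumption.
        from collections import deque

        class TerminationController:
            def __init__(self, sla_seconds: float, window: int = 20):
                self.sla = sla_seconds
                self.history = deque(maxlen=window)   # recent execution times

            def record(self, exec_time: float) -> None:
                """Feed the controller an observed execution time."""
                self.history.append(exec_time)

            def predicted_time(self) -> float:
                # Stand-in predictor: moving average of recent executions.
                return sum(self.history) / len(self.history) if self.history else 0.0

            def should_terminate(self, elapsed: float) -> bool:
                # Terminate an in-flight action once its elapsed time plus the
                # predicted remaining time would breach the SLA.
                return elapsed + self.predicted_time() > self.sla

        # Usage: record observed execution times, then poll in-flight actions.
        ctl = TerminationController(sla_seconds=2.0)
        for t in (0.4, 0.5, 0.6):
            ctl.record(t)
        print(ctl.should_terminate(elapsed=1.8))   # True: 1.8 + ~0.5 > 2.0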