
    Digital content popularity counting with Amazon Web Services

    The page hit counter system processes, counts, and stores page hit counts gathered from page hit events generated by a news media company's websites and mobile applications. The system serves a public application interface that can be queried over the internet for page hit count information. In this thesis I describe the process of replacing a legacy page hit counter system with a modern implementation in the Amazon Web Services ecosystem using serverless technologies. The process covers the background information, the project requirements, the design and comparison of different options, the implementation details, and the results. Finally, I show that the new system, implemented with Amazon Kinesis, AWS Lambda, and Amazon DynamoDB, has running costs that are less than half of those of the old one.
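
    The abstract names the building blocks but not the code, so the following is an illustrative sketch only: a minimal AWS Lambda handler in Python that consumes page hit events from an Amazon Kinesis stream and accumulates counts in Amazon DynamoDB. The table name, partition key, and event field are assumptions, not details taken from the thesis.

        # Illustrative sketch only: minimal Kinesis -> Lambda -> DynamoDB counter.
        # The table name "PageHits" and the event schema are assumptions.
        import base64
        import json

        import boto3

        dynamodb = boto3.resource("dynamodb")
        table = dynamodb.Table("PageHits")  # hypothetical table with partition key "page_id"

        def handler(event, context):
            """Triggered by a Kinesis stream; each record carries one page hit event."""
            for record in event["Records"]:
                payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
                page_id = payload["page_id"]  # assumed event field
                # Atomic counter: ADD creates the attribute on first use and increments it afterwards.
                table.update_item(
                    Key={"page_id": page_id},
                    UpdateExpression="ADD hit_count :one",
                    ExpressionAttributeValues={":one": 1},
                )
            return {"processed": len(event["Records"])}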

    Cloud-based data-intensive framework towards fault diagnosis in large-scale petrochemical plants

    Industrial Wireless Sensor Networks (IWSNs) are expected to offer promising monitoring solutions for fault diagnosis in large-scale petrochemical plants; such monitoring, however, involves heterogeneity and Big Data problems due to the large volume and velocity of the sensor data. Cloud Computing is a compelling approach, providing a flexible platform for addressing such heterogeneous and data-intensive problems with massive computing, storage, and data-based services. In this paper, we propose a Cloud-based Data-intensive Framework (CDF) for an on-line equipment fault diagnosis system that facilitates the integration and processing of the mass of sensor data generated by an Industrial Sensing Ecosystem (ISE). The ISE enables collection of the data of interest through topic-specific industrial monitoring systems. Moreover, the approach contributes to the establishment of an on-line fault diagnosis monitoring system with sensor stream computing and storage paradigms based on Hadoop as a key to these complex problems. Finally, we present a practical illustration of the framework serving equipment fault diagnosis systems with the ISE.
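
    The paper does not include code; as a hedged sketch of the kind of Hadoop batch step such a framework might run, the following Hadoop Streaming mapper and reducer (Python) aggregate raw sensor readings per piece of equipment and flag those whose mean reading exceeds a threshold. The input format, the sensor name, and the threshold are assumptions made only for illustration; in a real job, mapper and reducer would be separate executables passed to the streaming jar.

        # Illustrative Hadoop Streaming sketch (not from the paper): flag equipment whose
        # mean reading exceeds a threshold. Input lines are assumed to look like
        #   equipment_id<TAB>sensor_type<TAB>value
        import sys

        THRESHOLD = 7.5  # hypothetical alarm level

        def mapper(stdin=sys.stdin, stdout=sys.stdout):
            """mapper.py: emit (equipment_id, value) for the sensor of interest."""
            for line in stdin:
                try:
                    equipment_id, sensor_type, value = line.rstrip("\n").split("\t")
                except ValueError:
                    continue  # skip malformed records
                if sensor_type == "vibration":  # assumed sensor of interest
                    stdout.write(f"{equipment_id}\t{value}\n")

        def reducer(stdin=sys.stdin, stdout=sys.stdout):
            """reducer.py: Hadoop delivers mapper output sorted by key, so a running
            group per equipment_id is enough to compute the mean and flag faults."""
            current_id, total, count = None, 0.0, 0

            def emit():
                if current_id is not None and count:
                    mean = total / count
                    status = "FAULT" if mean > THRESHOLD else "OK"
                    stdout.write(f"{current_id}\t{mean:.2f}\t{status}\n")

            for line in stdin:
                equipment_id, value = line.rstrip("\n").split("\t")
                if equipment_id != current_id:
                    emit()
                    current_id, total, count = equipment_id, 0.0, 0
                total += float(value)
                count += 1
            emit()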

    Managing IT Operations in a Cloud-driven Enterprise: Case Studies

    Enterprise IT needs a new approach to managing processes, applications, and infrastructure that are distributed across a mix of environments. Traditionally, a request to deliver an application to the business could take weeks or months because of the decision-making functions, multiple approval bodies, and processes that exist within IT departments. These delays in delivering a requested service can lead to dissatisfaction, with the result that the line-of-business group may seek alternative sources of IT capabilities. The complex IT infrastructure of these enterprises also cannot keep up with the demand for new applications and services from an increasingly dispersed and mobile workforce, which results in slower rollout of critical applications and services, limited resources, and poor operational visibility and control. In such scenarios it is better to adopt cloud services for new application deployment; otherwise most enterprise IT organizations face the risk of losing 'market share' to the public cloud. With the cloud model, organizations can increase ROI, lower TCO, and run seamless IT operations; it also helps to curb shadow IT and the practice of resource over- or under-provisioning. In this research paper we present two case studies in which we migrated two enterprise IT applications to public clouds to lower TCO and raise ROI. By migrating, the IT organizations improved IT agility and gained enterprise-class performance, security, and control. We also discuss the advantages of and challenges in adopting cloud services.

    When to Utilize Software as a Service

    Cloud computing enables on-demand network access to shared resources (e.g., computation, networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort. Cloud computing refers both to the applications delivered as services over the Internet and to the hardware and system software in the data centers. Software as a Service (SaaS) is one of the cloud service models: software deployed as a hosted service and accessed over the Internet. In SaaS, the consumer uses the provider's applications running in the cloud. SaaS separates the possession and ownership of software from its use; the applications can be accessed from any device through a thin client interface. A typical SaaS application is used through a web browser and priced monthly. In this thesis, the characteristics of cloud computing and SaaS are presented, and a few implementation platforms for SaaS are discussed. Then, four different SaaS implementation cases and one transformation case are examined, and the pros and cons of SaaS are studied. This is done on the basis of literature references and an analysis of the SaaS implementations and the transformation case, from both the customer's and the service provider's point of view. In addition, the pros and cons of on-premises software are listed. The purpose of this thesis is to find out when SaaS should be utilized and when it is better to choose traditional on-premises software. The qualities of SaaS bring many benefits to both the customer and the provider. A customer should utilize SaaS when it provides cost savings, ease, and scalability over on-premises software. SaaS is reasonable when the customer does not need tailoring but only a simple, general-purpose service, and the application supports the customer's core business. A provider should utilize SaaS when it offers cost savings, scalability, faster development, and a wider customer base over on-premises software. It is wise to choose SaaS when the application is cheap, aimed at the mass market, needs frequent updating, needs high-performance computing, needs to store large amounts of data, or gains some other direct value from the cloud infrastructure.
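
    The cost-savings criterion above can be made concrete with a small back-of-the-envelope sketch; every figure in it is hypothetical and chosen only for illustration, not taken from the thesis.

        # Hypothetical multi-year cost comparison; all figures below are assumptions
        # used only to illustrate the "choose SaaS when it is cheaper" criterion.

        def saas_cost(users, monthly_fee_per_user, years):
            """Subscription pricing: pay per user per month, no upfront investment."""
            return users * monthly_fee_per_user * 12 * years

        def on_premises_cost(license_fee, hardware, annual_maintenance, annual_ops, years):
            """Upfront licence and hardware plus recurring maintenance and operations."""
            return license_fee + hardware + (annual_maintenance + annual_ops) * years

        if __name__ == "__main__":
            years = 5
            saas = saas_cost(users=200, monthly_fee_per_user=15, years=years)
            on_prem = on_premises_cost(license_fee=60_000, hardware=40_000,
                                       annual_maintenance=12_000, annual_ops=25_000,
                                       years=years)
            cheaper = "SaaS" if saas < on_prem else "on-premises"
            print(f"5-year SaaS cost:        {saas:>10,.0f}")
            print(f"5-year on-premises cost: {on_prem:>10,.0f}")
            print(f"Cheaper option under these assumptions: {cheaper}")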

    Big Data Analytics and Application Deployment on Cloud Infrastructure

    This dissertation describes a project that began in October 2016. It was born from the collaboration between Mr. Alessandro Bandini and me, and it has been developed under the supervision of Professor Gianluigi Zavattaro. The main objective was to study, and in particular to experiment with, cloud computing in general and its potential in the field of data processing. Cloud computing is a utility-oriented and Internet-centric way of delivering IT services on demand. The first chapter is a theoretical introduction to cloud computing, analyzing its main aspects, keywords, and underlying technologies, as well as the reasons for the technology's success and its problems. After the introduction, I briefly describe the three main cloud platforms on the market. During this project we developed a simple social network. Consequently, in the third chapter I analyze its development, from the initial solution built on Amazon Web Services through the steps we took to reach the final version on Google Cloud Platform and that platform's characteristics. The last section is devoted to data processing: an initial theoretical part describes MapReduce and Hadoop, followed by a description of our analysis. We used Google App Engine to run these computations on a large dataset. I explain the basic idea, the code, and the problems encountered.
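
    Since the last section of the dissertation centres on MapReduce, a minimal self-contained sketch of the programming model may help; it is the canonical word-count example, not code from the dissertation, and a real job would run on Hadoop or a managed service rather than in a single process.

        # Minimal in-process illustration of the MapReduce model (canonical word count).
        from collections import defaultdict

        def map_phase(documents):
            """Map: emit a (word, 1) pair for every word in every document."""
            for doc in documents:
                for word in doc.lower().split():
                    yield word, 1

        def shuffle(pairs):
            """Shuffle: group all intermediate values by key."""
            grouped = defaultdict(list)
            for key, value in pairs:
                grouped[key].append(value)
            return grouped

        def reduce_phase(grouped):
            """Reduce: sum the counts for each word."""
            return {word: sum(counts) for word, counts in grouped.items()}

        if __name__ == "__main__":
            docs = ["cloud computing on demand", "computing in the cloud"]
            print(reduce_phase(shuffle(map_phase(docs))))
            # {'cloud': 2, 'computing': 2, 'on': 1, 'demand': 1, 'in': 1, 'the': 1}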

    Survey and Analysis of Production Distributed Computing Infrastructures

    This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available and how it has succeeded and failed. The set is not complete, but we believe it is representative. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures.

    Rise of the Planet of Serverless Computing: A Systematic Review

    Serverless computing is an emerging cloud computing paradigm being adopted to develop a wide range of software applications. It allows developers to focus on application logic at the granularity of the function, thereby freeing them from tedious and error-prone infrastructure management. Meanwhile, its unique characteristics pose new challenges to the development and deployment of serverless-based applications, and enormous research effort has been devoted to tackling them. This paper provides a comprehensive literature review to characterize the current state of serverless computing research. Specifically, it covers 164 papers across 17 research directions, including performance optimization, programming frameworks, application migration, multi-cloud development, and testing and debugging. It also derives research trends, areas of focus, and commonly used platforms for serverless computing, as well as promising research opportunities.

    A mixed-method empirical study of Function-as-a-Service software development in industrial practice

    Function-as-a-Service (FaaS) describes cloud computing services that make infrastructure components transparent to application developers, thus falling in the larger group of “serverless” computing models. When using FaaS offerings, such as AWS Lambda, developers provide atomic and short-running code for their functions, and FaaS providers execute and horizontally scale them on-demand. Currently, there is no systematic research on how developers use serverless, what types of applications lend themselves to this model, or what architectural styles and practices FaaS-based applications are based on. We present results from a mixed-method study, combining interviews with practitioners who develop applications and systems that use FaaS, a systematic analysis of grey literature, and a Web-based survey. We find that successfully adopting FaaS requires a different mental model, where systems are primarily constructed by composing pre-existing services, with FaaS often acting as the “glue” that brings these services together. Tooling availability and maturity, especially related to testing and deployment, remains a major difficulty. Further, we find that current FaaS systems lack systematic support for function reuse, and abstractions and programming models for building non-trivial FaaS applications are limited. We conclude with a discussion of implications for FaaS providers, software developers, and researchers.
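
    The “glue” role described above can be illustrated with a hedged sketch that is not taken from the study: a short, stateless AWS Lambda handler that reacts to an S3 “object created” event and forwards a notification through Amazon SNS, composing two managed services with very little code of its own. The topic ARN is a placeholder, and the event fields follow the standard S3 notification shape.

        # Illustrative "glue" FaaS function (not from the study): a short, stateless
        # handler connecting two managed services. The topic ARN is a placeholder.
        import json
        import os

        import boto3

        sns = boto3.client("sns")
        TOPIC_ARN = os.environ.get(
            "TOPIC_ARN", "arn:aws:sns:eu-west-1:123456789012:uploads"  # placeholder
        )

        def handler(event, context):
            """Triggered by an S3 object-created event; publishes a summary to SNS."""
            records = event.get("Records", [])
            for record in records:
                bucket = record["s3"]["bucket"]["name"]
                key = record["s3"]["object"]["key"]
                size = record["s3"]["object"].get("size", 0)
                sns.publish(
                    TopicArn=TOPIC_ARN,
                    Subject="New object uploaded",
                    Message=json.dumps({"bucket": bucket, "key": key, "size": size}),
                )
            return {"notified": len(records)}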