    Design and evaluation of a cloud native data analysis pipeline for cyber physical production systems

    Since the birth of the World Wide Web in 1991, the rate of data growth has kept increasing, reaching record levels in the last couple of years. Big companies tackled this growth with expensive, enormous data centres to process the data and extract value from it. With social media, the Internet of Things (IoT), new business processes, monitoring and multimedia, the capacity of those data centres became a problem and required continuous and expensive expansion. Thus, Big Data was something that only a few were able to access. This changed quickly when Amazon launched Amazon Web Services (AWS) around 15 years ago and gave rise to the public cloud. At that time the capabilities were still new and limited, but 10 years later the cloud was a whole new business that changed the Big Data landscape forever. It not only commoditised computing power but was also accompanied by a pricing model that gave medium and small players the possibility to access it. In consequence, new problems arose regarding the nature of these distributed systems and the software architectures required for proper data processing. The present work analyses typical Big Data workloads and proposes an architecture for a cloud native data analysis pipeline. Lastly, it provides a chapter on tools and services that can be used in the architecture, taking advantage of their open source nature and cloud pricing models.
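
    The proposed pipeline architecture itself is not reproduced in this listing, but the core property of a cloud native stage, stateless workers that read raw data from object storage, transform it and write results back, can be illustrated with a short sketch. The snippet below is a hypothetical example using boto3 against S3-compatible storage; the bucket names, prefixes and the aggregation logic are assumptions made for illustration, not taken from the original work.

    ```python
    # Minimal sketch of one stateless stage in a cloud native data pipeline.
    # Assumed layout: raw JSON-lines event files under s3://raw-events/...,
    # aggregated counts written to s3://curated-results/... (names illustrative).
    import json
    from collections import Counter

    import boto3

    s3 = boto3.client("s3")

    def run_stage(raw_bucket, prefix, out_bucket, out_key):
        """Read raw event files, count events per type, write one summary object."""
        counts = Counter()
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=raw_bucket, Prefix=prefix):
            for obj in page.get("Contents", []):
                body = s3.get_object(Bucket=raw_bucket, Key=obj["Key"])["Body"].read()
                for line in body.decode("utf-8").splitlines():
                    event = json.loads(line)
                    counts[event.get("type", "unknown")] += 1
        # The stage keeps no local state, so any number of workers can process
        # disjoint prefixes and be rescheduled freely by the platform.
        s3.put_object(Bucket=out_bucket, Key=out_key,
                      Body=json.dumps(dict(counts)).encode("utf-8"))

    if __name__ == "__main__":
        run_stage("raw-events", "2024/05/", "curated-results", "2024/05/summary.json")
    ```

    Because each stage is stateless, capacity can follow the pay-per-use cloud pricing model the abstract refers to: workers are added for a burst of data and removed afterwards.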

    Sovelluskehyslähtöisen web-sovelluksen arkkitehtuurin ja kehittämisen parantaminen pilviympäristössä (Improving the architecture and development of a framework-based web application in a cloud environment)

    Frameworks are widely used in web development. As web frameworks provide the developer with basic functionality, such as object-relational mapping and database abstraction, the developer has more time to concentrate on the actual problem. A common practice has been that framework-based web applications are deployed in a single-server environment. As the advent of cloud computing has made cloud platforms popular, web developers are facing a change in their working environment. The change affects not only the environment but also the techniques and practices used within current web frameworks. The main problem is that web frameworks have their roots in the single-server era, when cloud platforms did not exist on such a large scale as they do today. In this thesis we focus on how web developers can use their previous competence in the cloud environment. The research is done by developing an example application using the Django web framework. The example application is then deployed to the Heroku cloud platform. The example application and its implementation are used throughout the thesis to identify the most common pitfalls a developer might encounter while deploying a framework-based web application to a cloud platform. The pitfalls are analysed in order to find the root causes of why current web frameworks do not fully fit the cloud environment. The findings show that most of the pitfalls are related to using web framework practices or techniques that, for example, store the application state inside the server’s memory or on the local file system. As the cloud platform environment is a distributed system, the application state should be stored in persistent storage and made accessible to each web server. In addition, the findings indicate that developers must pay extra attention to application design and architecture in the cloud environment. The thesis gives an analysis method for choosing third-party plugins and suggests ways to improve framework-based development. By specifying essential cloud platform features, the needs of an elastic cloud application are defined. The conclusions provide insight into the current status of web development and discuss how web development can be improved by using current cloud platforms and web frameworks.
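
    The key finding above, that application state must not live in a single server’s memory or local file system, maps to a handful of framework settings. The fragment below is a minimal sketch for Django on a Heroku-style platform, assuming Django 4.x, a Redis add-on exposed through REDIS_URL, and the django-storages package; the environment variable and bucket names are invented for illustration.

    ```python
    # settings.py fragments: keeping a Django app stateless enough for a
    # multi-instance cloud platform. Names and env vars are illustrative.
    import os

    # Sessions go to a shared cache instead of local process memory, so any
    # web instance can serve any request.
    CACHES = {
        "default": {
            "BACKEND": "django.core.cache.backends.redis.RedisCache",
            "LOCATION": os.environ.get("REDIS_URL", "redis://localhost:6379/0"),
        }
    }
    SESSION_ENGINE = "django.contrib.sessions.backends.cache"

    # Uploaded files go to object storage instead of the instance's ephemeral disk.
    DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
    AWS_STORAGE_BUCKET_NAME = os.environ.get("MEDIA_BUCKET", "example-media")
    ```

    With these two changes the web tier holds no request-scoped state locally, which is exactly the property the findings above identify as missing from single-server framework defaults.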

    Security Challenges from Abuse of Cloud Service Threat

    Cloud computing is an ever-growing technology that leverages dynamic and versatile provisioning of computational resources and services. In spite of the countless benefits that cloud services offer, there is always a security concern about new threats and risks. The paper provides a useful introduction to the rising security issues around the Abuse of Cloud Services threat, for which there are no standard security measures to mitigate the associated risks and vulnerabilities. The threat can result in unbearable system gridlock and can make cloud services unavailable or even cause a complete shutdown. The study identifies the potential challenges arising from the Abuse of Cloud Services threat, namely BotNet, BotCloud, Shared Technology Vulnerabilities and Malicious Insiders. It further describes the attack methods, their impacts and the reasons behind the identified challenges. Finally, the study evaluates the currently available solutions and proposes mitigating security controls for the security risks and challenges posed by the Abuse of Cloud Services threat.

    Nucleus - Unified Deployment and Management for Platform as a Service

    Cloud computing promises several advantages over classic IT models and has undoubtedly been one of the most hyped topics in the industry over the last couple of years. Besides the established delivery models Infrastructure as a Service (IaaS) and Software as a Service (SaaS), Platform as a Service (PaaS) in particular has recently attracted significant attention. PaaS facilitates the hosting of scalable applications in the cloud by providing managed and highly automated application environments. Although most offerings are conceptually comparable to each other, the interfaces for application deployment and management vary greatly between vendors. Despite providing similar functionality, technically different workflows and commands provoke vendor lock-in and hinder portability as well as interoperability. In this study, we present the tool Nucleus, which realizes a unified interface for application deployment and management across cloud platforms. With its help, we aim to increase the portability of PaaS applications and thus help to avoid critical vendor lock-in effects.
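
    The abstract does not reproduce Nucleus’s interface, so the sketch below illustrates only the general adapter idea it describes: one unified set of calls, with a vendor-specific adapter behind each. All class and method names are invented for illustration and are not the actual Nucleus API.

    ```python
    # Generic sketch of a unified deployment interface over several PaaS vendors.
    # NOT the actual Nucleus API; names are invented to show the adapter idea.
    from abc import ABC, abstractmethod

    class PaasAdapter(ABC):
        """One adapter per vendor translates unified calls into native workflows."""

        @abstractmethod
        def deploy(self, app_name: str, archive_path: str) -> str: ...

        @abstractmethod
        def scale(self, app_name: str, instances: int) -> None: ...

    class HerokuAdapter(PaasAdapter):
        def deploy(self, app_name, archive_path):
            # Would call the vendor's build/release API here.
            return f"deployed {archive_path} to {app_name} on Heroku"

        def scale(self, app_name, instances):
            pass  # would set the dyno formation to the requested count

    def deploy_everywhere(adapters, app, archive):
        # Callers only ever see the unified interface; switching vendors means
        # swapping the adapter, not relearning the deployment workflow.
        for adapter in adapters:
            print(adapter.deploy(app, archive))
    ```

    This is the property the study aims for: the unified interface, not the vendor command set, becomes the stable surface that applications and tooling depend on.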

    Cloud-Based Software Engineering : Proceedings of the Seminar No. 58312107

    The seminar on cloud-based software engineering in 2013 covered many interesting topics related to cloud computing and software engineering. These proceedings focus on decision support for moving to the cloud, on opportunities that cloud computing provides to software engineering, and on security aspects associated with cloud computing.

    Moving to the Cloud – Options, Criteria, and Decision Making: Cloud computing can enable or facilitate software engineering activities through the use of computational, storage and other resources over the network. Organizations and individuals interested in cloud computing must balance the potential benefits and risks associated with it. It might not always be worthwhile to transfer existing services and content to external or internal, public or private clouds, for a number of reasons. Standardized information and metrics from cloud service providers may help in deciding which provider to choose. Care should be taken when making the decision, as switching from one service provider to another can be burdensome due to incompatibilities between providers. Hardware in data centers is not infallible: the equipment that powers cloud computing services is as prone to failure as any computing equipment put under high stress, which can affect the availability of services.

    Software Engineering – New Opportunities with the Cloud: Public and private clouds can be platforms for the services produced by various parties, but cloud computing resources and services can be helpful during software development as well. Tasks like testing or compiling, which might take a long time to complete on a single local workstation, can be shifted to run on network resources for improved efficiency. Collaborative tools that take advantage of features of cloud computing can also potentially boost communication in software development projects spread across the globe.

    Security in the Cloud – Overview and Recommendations: In an environment where resources can be shared with other parties and controlled by a third party, security is one matter that needs to be addressed. Without encryption, data stored in third-party-owned network storage is vulnerable, and thus secure mechanisms are needed to keep the data safe.

    The student seminar was held during the 2013 spring semester, from January 16th to May 24th, at the Department of Computer Science of the University of Helsinki. There were a total of 16 papers in the seminar, of which 11 were selected for the proceedings based on their suitability to the three themes. In some cases, papers were excluded so that they could be published elsewhere. A full list of all the seminar papers can be found in the appendix. We wish you an interesting and enjoyable reading experience with the proceedings.
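
    The security theme above notes that data kept in third-party storage needs to be protected before it leaves the client. As a minimal, hypothetical illustration (not taken from any of the seminar papers), the sketch below uses the Python cryptography package’s Fernet recipe; in practice the key would live in a key management service rather than alongside the data.

    ```python
    # Hypothetical sketch: encrypt data client-side before uploading it to
    # third-party storage, so the provider only ever holds ciphertext.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # must be kept outside the cloud store
    f = Fernet(key)

    ciphertext = f.encrypt(b"customer record 42")   # upload only this
    assert f.decrypt(ciphertext) == b"customer record 42"
    ```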

    Smooth operations for large stateful in-memory database application: Using Kubernetes orchestration and Apache Helix for improving operations

    Relex Solutions’ Plan product is architecturally a giant stateful monolith with an in-memory database. A system is considered a monolith if all its services need to be deployed together. The database has been kept in-memory because of the amount of data the application needs to process and how much faster the performance is when the data is kept in memory. The Plan architects are looking into adopting Kubernetes as an orchestration and lifecycle management tool. Having an orchestrator in place would provide several benefits, such as automatic scheduling of workloads onto a shared pool of resources and better isolation between customers. Kubernetes orchestration is part of a bigger architecture initiative to further modularize Relex Plan in an attempt to make the monolith more flexible. This thesis is about finding solutions for keeping operations smooth with Kubernetes and Apache Helix. Literature review and design science are used as the main research methodologies. With the Helix role rebalancer and Kubernetes’ StatefulSet, we can easily scale out and scale in with graceful shutdown. Autoscaling would be well supported by having a resource pool in Kubernetes. Creating pods with a StatefulSet ensures that each pod has a persistent identifier, so rescheduling and restoring pods in a Kubernetes-native way is covered, while the Helix rebalancer ensures the cluster has the wanted number of Plan roles, so there is minimal interruption to the users. Zero downtime would require backwards compatibility for database schema updates, which must be implemented on the product side. Backwards compatibility would technically be a requirement if the Kubernetes-native rolling update deployment strategy, with zero downtime, is to be taken into use in the future. The solution can be applied to other monolithic software architectures with a similar setup.
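
    Scale-in with graceful shutdown, as described above, hinges on the pod reacting to the termination signal before Kubernetes forcibly kills it. The sketch below is an invented, simplified illustration of that mechanism in Python; it is not code from the thesis, and the actual Plan workers, Helix integration and flush logic are far more involved.

    ```python
    # Sketch of graceful shutdown for a stateful worker in a Kubernetes pod.
    # Kubernetes sends SIGTERM on scale-in or rescheduling and waits
    # terminationGracePeriodSeconds before SIGKILL; the worker uses that
    # window to stop taking work and persist its in-memory state.
    import signal
    import time

    shutting_down = False

    def handle_sigterm(signum, frame):
        global shutting_down
        shutting_down = True   # stop accepting new work

    signal.signal(signal.SIGTERM, handle_sigterm)

    def flush_state_to_durable_storage():
        # Placeholder: snapshot or hand off the in-memory database state.
        print("state flushed")

    def main():
        while not shutting_down:
            time.sleep(0.1)    # ... serve requests / process one unit of work ...
        flush_state_to_durable_storage()
        print("worker exiting cleanly")

    if __name__ == "__main__":
        main()
    ```

    A StatefulSet keeps each pod’s identity stable across such restarts, while, as described above, the Helix rebalancer ensures the cluster keeps the wanted number of Plan roles.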

    The long sale: future-setting strategies for enterprise technologies

    Markets for enterprise technologies are complex socio-technical arrangements where the nature of the goods or services available for exchange is frequently uncertain. Early offerings may appear obfuscated, in part ontologically due to contested boundary definitions, and in part through the intentional and unintentional work of sales actors. While it is difficult for actors to know with certainty what they are transacting before an exchange occurs, expectations are partly shaped in practice during a protracted and multipartite sales process. In the early stages, such technologies may be nothing more than ‘slideware’ or ‘vapourware’, with the promise of the offering yet to be realised. Suppliers are therefore faced with the challenge of how to bring an immature product to the serious attention of users. One such example, which has dominated the ICT landscape in recent times, is ‘cloud computing’, a vision for on-demand utility computing which on the one hand promised computing resources accessible like an infrastructure commodity such as electricity, but on the other was declared by some to be simply everything we already do in computing today. This thesis offers a longitudinal case study of the way in which a major ICT supplier, IBM, attempted to galvanise the market for its cloud-enabled products amongst user organisations. In doing so the supplier had the challenge of selling a model of outsourced services to organisations with deeply embedded ICT systems, around which the sales processes had to be made to fit. The research centres on four empirical chapters which bring together contextual narratives of cloud computing, findings related to the sales work users do, the sales challenges encountered during crisis management, and the shadow activity that occurs during professional user groups and conferences. The discussion explains how actors work together to construct an imagined community of technology artefacts and practices, which extends our understanding of how technology constituencies hold together without overt forms of control. The study draws together a number of years of fieldwork investigating user group events in the corporate ICT arena and a major UK customer implementation. These are explored through a mobile ethnography under the banner of a Biography of Artefacts and Practices (Pollock & Williams, 2008), making use of participant observation and selective interviewing, with a particular focus on naturally occurring data.

    Reaching High Availability in Connected Car Backend Applications

    The connected car segment has high demands on the exchange of data between the car on the road and a variety of services in the backend. By the end of 2020, connected services will be mainstream automotive offerings; according to Telefónica's Connected Car Industry Report 2014, the overall number of vehicles with built-in internet connectivity will increase from 10% of the overall market today to 90% by the end of the decade [1]. Connected car solutions will soon become one of the major business drivers for the industry; they already have a significant impact on existing solution development and the aftersales market. It has been more than three decades since the introduction of the first software component in cars, and since then a vast number of different services has been introduced, creating an ecosystem of complex applications, architectures, and platforms. The complexity of the connected car ecosystem results in a range of new challenges. The backend applications must be scalable and flexible enough to accommodate loads created by random user and device behavior. To deliver superior uptime, backend systems must be highly integrated and automated to guarantee the lowest possible failure rate, high availability, and fastest time-to-market. Connected car services increasingly rely on cloud-based service delivery models for improving user experiences and enhancing features for millions of vehicles and their users on a daily basis. Nowadays, software applications are becoming more complex, and the number of components that are involved and interact with each other is extremely large. In such systems, if a fault occurs, it can easily propagate and affect other components, resulting in a complex problem that is difficult to detect and debug. A robust and resilient architecture is therefore needed, one that ensures the continued availability of the system in the wake of component failures and keeps the overall system highly available. The goal of the thesis is to gain insight into the development of highly available applications and to explore the area of fault tolerance. This thesis outlines different design patterns, describes the capabilities of fault tolerance libraries for the Java platform, designs the most appropriate solution for developing a highly available application, and evaluates its behavior with stress and load testing using Chaos Monkey methodologies.
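
    The abstract names neither the design patterns nor the Java libraries it evaluates, so the sketch below only illustrates one widely used fault-tolerance pattern it plausibly covers, the circuit breaker, written in Python for consistency with the other sketches in this listing. Thresholds, timeouts and the wrapped call are all invented.

    ```python
    # Hedged sketch of the circuit-breaker pattern: after repeated failures the
    # breaker "opens" and fails fast, giving the faulty backend time to recover
    # instead of letting failures cascade through the system.
    import time

    class CircuitBreaker:
        def __init__(self, failure_threshold=3, reset_timeout=30.0):
            self.failure_threshold = failure_threshold
            self.reset_timeout = reset_timeout
            self.failures = 0
            self.opened_at = None          # None means the circuit is closed

        def call(self, func, *args, **kwargs):
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_timeout:
                    raise RuntimeError("circuit open: failing fast")
                self.opened_at = None      # half-open: allow one trial call
            try:
                result = func(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.failure_threshold:
                    self.opened_at = time.monotonic()   # trip the breaker
                raise
            self.failures = 0              # success closes the circuit again
            return result

    # Usage: wrap an unreliable backend call so its failures stop propagating.
    breaker = CircuitBreaker()
    def fetch_vehicle_status(vin):
        return breaker.call(lambda: {"vin": vin, "status": "ok"})  # stand-in call
    ```

    Chaos-testing tools in the Chaos Monkey style then verify that, with such patterns in place, injected component failures degrade the service gracefully instead of taking it down.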