
    Designing Scalable Business Models

    Digital business models are often designed for rapid growth, and some relatively young companies have indeed achieved global scale. However, despite the visibility and importance of this phenomenon, analysis of scale and scalability remains underdeveloped in the management literature. When it is addressed, the analysis is often over-influenced by arguments about economies of scale in production and distribution. To redress this omission, this paper draws on the economics, organization and technology management literatures to provide a detailed examination of the sources of scaling in digital businesses. We propose three mechanisms by which digital business models attempt to gain scale: engaging both non-paying users and paying customers; organizing customer engagement to allow self-customization; and orchestrating networked value chains, such as platforms or multi-sided business models. Scaling conditions are discussed, and propositions are developed and illustrated with examples of big data entrepreneurial firms.

    An engineering approach to business model experimentation – an online investment research startup case study

    Every organization needs a viable business model. Strikingly, most of the current literature focuses on business model design, with almost no attention to business model validation and implementation and the related business model experimentation. The goal of the research described in this paper is to develop a business model engineering tool that supports business model management as a continuous cycle of design, validation and implementation. The tool is applied to an online investment research startup in the roll-out and market phase. This paper describes the research, performed in a case study setting, by focusing on the design, implementation and evaluation of the business model engineering tool. We also analyze the actual implementation and usage of the tool by the online investment research startup, focusing on the most critical actions related to actual business model implementation, i.e. actions with so-called 'Lollapalooza tendencies'.

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services that achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, so load coordination must happen automatically, and the distribution of services must change in response to changes in load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and databases) for handling sudden variations in service demand. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, offering significant performance gains in response time and cost savings under dynamic workload scenarios.
    Comment: 20 pages, 4 figures, 3 tables, conference paper
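
    To make the federation idea concrete, the sketch below shows one way a federation-level broker could pick a data center for an incoming request based on current utilisation, latency and cost. It is a minimal illustration only: the class names, the scoring formula and the weighting are assumptions for this sketch, not the paper's CloudSim-based design.

```python
# Illustrative placement policy for a federated ("InterCloud"-style) broker.
# All names and the scoring heuristic are assumptions, not the paper's method.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    capacity_vms: int
    used_vms: int
    cost_per_vm_hour: float
    base_latency_ms: float  # latency from the requesting region (assumed known)

    def can_host(self, vms: int) -> bool:
        return self.used_vms + vms <= self.capacity_vms

    def score(self, vms: int, latency_weight: float = 0.7) -> float:
        # Lower is better: latency penalised by utilisation, blended with cost.
        utilisation = (self.used_vms + vms) / self.capacity_vms
        return (latency_weight * self.base_latency_ms * (1 + utilisation)
                + (1 - latency_weight) * self.cost_per_vm_hour * vms)

def place_request(federation: list[DataCenter], vms_needed: int) -> DataCenter | None:
    """Pick the feasible data center with the lowest score; None if the federation is saturated."""
    candidates = [dc for dc in federation if dc.can_host(vms_needed)]
    if not candidates:
        return None  # a real broker would queue the request or trigger capacity expansion
    best = min(candidates, key=lambda dc: dc.score(vms_needed))
    best.used_vms += vms_needed
    return best

if __name__ == "__main__":
    federation = [
        DataCenter("dc-eu", 100, 80, 0.12, 30.0),
        DataCenter("dc-us", 200, 50, 0.10, 90.0),
    ]
    chosen = place_request(federation, vms_needed=10)
    print("placed on:", chosen.name if chosen else "rejected")
```

    In this toy run the lightly loaded but distant data center wins once the nearer one approaches saturation, which is the kind of dynamic, load-driven redistribution the abstract argues for.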

    A FUNCTIONAL SKETCH FOR RESOURCES MANAGEMENT IN COLLABORATIVE SYSTEMS FOR BUSINESS

    This paper presents a functional design sketch for the resource management module of a highly scalable collaborative system. Small and medium enterprises need such tools in order to benefit from and develop innovative business ideas and technologies. As demand for computing power keeps growing and no easy, inexpensive solutions exist, small companies and emerging business projects in particular need a more accessible alternative. Our work aims to establish a model for how a P2P architecture can be used as the infrastructure for a collaborative system that delivers resource access services. We focus on finding a workable collaborative strategy between peers so that the system offers a cheap, trustworthy and high-quality service. Thus, in this phase we are not concerned with solutions for a specific type of task to be executed by peers, and consider only CPU power as a resource. This work concerns the resource management module as part of a larger project in which we aim to build a collaborative system for businesses with substantial resource demands.
    Keywords: resource management, P2P, open systems, service-oriented computing, collaborative systems
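
    As a rough illustration of the kind of policy such a resource management module might apply when CPU power is the only resource considered, the sketch below ranks peers by advertised spare CPU weighted by a trust score derived from past task outcomes. The data model and the ranking heuristic are hypothetical, not the paper's design.

```python
# Hypothetical peer-selection sketch for a P2P CPU-sharing module.
from dataclasses import dataclass

@dataclass
class Peer:
    peer_id: str
    spare_cpu_ghz: float   # CPU capacity the peer is willing to share
    completed: int = 0     # tasks finished correctly
    failed: int = 0        # tasks dropped or returned with bad results

    @property
    def trust(self) -> float:
        # Laplace-smoothed success rate, so new peers start near 0.5.
        total = self.completed + self.failed
        return (self.completed + 1) / (total + 2)

    def rank(self) -> float:
        return self.spare_cpu_ghz * self.trust

def select_peers(peers: list[Peer], cpu_needed_ghz: float) -> list[Peer]:
    """Greedily pick the highest-ranked peers until the CPU demand is covered."""
    chosen, acquired = [], 0.0
    for peer in sorted(peers, key=lambda p: p.rank(), reverse=True):
        if acquired >= cpu_needed_ghz:
            break
        chosen.append(peer)
        acquired += peer.spare_cpu_ghz
    return chosen if acquired >= cpu_needed_ghz else []  # empty list = demand cannot be met

if __name__ == "__main__":
    swarm = [Peer("a", 2.0, 40, 2), Peer("b", 3.5, 5, 5), Peer("c", 1.0, 100, 1)]
    print([p.peer_id for p in select_peers(swarm, cpu_needed_ghz=3.0)])
```

    Weighting capacity by trust is one simple way to reconcile the paper's twin goals of a cheap service and a trustworthy one; a real system would also need result verification and incentives, which this sketch omits.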

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems in order to better understand their goals and methodology, which helps in evaluating their applicability to similar problems. The taxonomy also provides a "gap analysis" of the area, through which researchers can identify new issues for investigation. We hope that the proposed taxonomy and mapping provide an easy way for new practitioners to understand this complex area of research.
    Comment: 46 pages, 16 figures, Technical Report
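
    A taxonomy-to-system mapping of this kind can be represented directly as a data structure, which also makes the "gap analysis" mechanical: category combinations with no entry are candidate gaps. The sketch below uses the four dimensions named in the abstract, but the category values and the example entry are invented placeholders, not the paper's actual taxonomy.

```python
# Toy representation of a taxonomy-to-system mapping (placeholder categories).
from dataclasses import dataclass
from enum import Enum, auto

class Architecture(Enum):
    HIERARCHICAL = auto()
    FEDERATED = auto()
    PEER_TO_PEER = auto()

class Transport(Enum):
    BULK_TRANSFER = auto()
    STREAMING = auto()

class Replication(Enum):
    STATIC = auto()
    DYNAMIC = auto()

class Scheduling(Enum):
    DATA_AWARE = auto()
    COMPUTE_ONLY = auto()

@dataclass
class DataGridSystem:
    name: str
    architecture: Architecture
    transport: Transport
    replication: Replication
    scheduling: Scheduling

# Systems classified against the dimensions; unoccupied combinations hint at research gaps.
catalogue = [
    DataGridSystem("example-grid", Architecture.HIERARCHICAL,
                   Transport.BULK_TRANSFER, Replication.DYNAMIC,
                   Scheduling.DATA_AWARE),
]
```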

    Architecture for Analysis of Streaming Data

    While several attempts have been made to construct a scalable and flexible architecture for the analysis of streaming data, no general model for this task exists. Our goal is therefore to build a scalable and maintainable architecture for performing analytics on streaming data. To reach this goal, we introduce a 7-layered architecture consisting of microservices and publish-subscribe software. Our study shows that this architecture yields a good balance between scalability and maintainability due to the high cohesion and low coupling of the solution, as well as asynchronous communication between the layers. This architecture can help practitioners improve their analytic solutions. It is also of interest to academics, as it is a building block for a general architecture for processing streaming data.
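
    The decoupling the abstract relies on can be illustrated with a very small publish-subscribe sketch: each layer is a microservice that talks only to a topic, never to another layer directly, so layers can be scaled or replaced independently. The in-process broker, topic name and the two example layers below are assumptions for illustration, not the paper's seven concrete layers.

```python
# Minimal pub-sub decoupling between two "layers"; the broker stands in for
# Kafka/RabbitMQ-style middleware. Names and layers are illustrative only.
import asyncio
from collections import defaultdict

class Broker:
    """Tiny in-process publish-subscribe broker."""
    def __init__(self) -> None:
        self._topics: dict[str, list[asyncio.Queue]] = defaultdict(list)

    def subscribe(self, topic: str) -> asyncio.Queue:
        queue: asyncio.Queue = asyncio.Queue()
        self._topics[topic].append(queue)
        return queue

    async def publish(self, topic: str, message) -> None:
        for queue in self._topics[topic]:
            await queue.put(message)

async def ingestion_layer(broker: Broker, raw_events) -> None:
    # Lower layer: pushes raw events onto a topic and knows nothing about consumers.
    for event in raw_events:
        await broker.publish("raw-events", event)
    await broker.publish("raw-events", None)  # end-of-stream marker

async def analytics_layer(inbox: asyncio.Queue) -> None:
    # Higher layer: consumes asynchronously from its own queue, decoupled from the producer.
    while (event := await inbox.get()) is not None:
        print("analysed:", event * 2)  # placeholder analytic

async def main() -> None:
    broker = Broker()
    inbox = broker.subscribe("raw-events")  # subscribe before any publishing happens
    analytics = asyncio.create_task(analytics_layer(inbox))
    await ingestion_layer(broker, raw_events=[1, 2, 3])
    await analytics

if __name__ == "__main__":
    asyncio.run(main())
```

    Because layers communicate only through topics and queues, each one can be tested, deployed and scaled on its own, which is the high-cohesion, low-coupling property the abstract attributes to the 7-layered design.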