
    E-Fulfillment and Multi-Channel Distribution – A Review

    This review addresses the specific supply chain management issues of Internet fulfillment in a multi-channel environment. It provides a systematic overview of managerial planning tasks and reviews corresponding quantitative models. In this way, we aim to enhance the understanding of multi-channel e-fulfillment and to identify gaps between relevant managerial issues and academic literature, thereby indicating directions for future research. One of the recurrent patterns in today's e-commerce operations is the combination of 'bricks-and-clicks', the integration of e-fulfillment into a portfolio of multiple alternative distribution channels. From a supply chain management perspective, multi-channel distribution provides opportunities for serving different customer segments, creating synergies, and exploiting economies of scale. However, to exploit these opportunities successfully, companies need to master novel challenges. In particular, the design of a multi-channel distribution system requires a constant trade-off between process integration and separation across multiple channels. In addition, sales and operations decisions are ever more tightly intertwined as delivery and after-sales services are becoming key components of the product offering.
    Keywords: Distribution; E-fulfillment; Literature Review; Online Retailing
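    The economies-of-scale argument can be made concrete with the classic square-root law of inventory pooling, a standard supply-chain result rather than a model from the review itself: serving two channels from one pooled stock needs less safety stock than two separate stocks for the same service level. A minimal Python sketch with illustrative demand figures:

        import math

        def safety_stock(sigma_daily: float, lead_time_days: float, z: float = 1.65) -> float:
            """Safety stock for one stocking point: z * sigma * sqrt(lead time)."""
            return z * sigma_daily * math.sqrt(lead_time_days)

        # Two channels served from separate stocks vs. one pooled stock.
        # Assumes independent, identically distributed daily demand per channel.
        sigma_per_channel = 40.0   # std. dev. of daily demand, units (illustrative)
        lead_time = 4.0            # replenishment lead time, days (illustrative)

        separate = 2 * safety_stock(sigma_per_channel, lead_time)
        pooled = safety_stock(math.sqrt(2) * sigma_per_channel, lead_time)  # variances add

        print(f"separate stocks: {separate:.0f} units, pooled stock: {pooled:.0f} units")
        # Pooling cuts safety stock by a factor of sqrt(2) -- the economies-of-scale
        # side of the integration-versus-separation trade-off.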

    SLA-Oriented Resource Provisioning for Cloud Computing: Challenges, Architecture, and Solutions

    Cloud computing systems promise to offer subscription-oriented, enterprise-quality computing services to users worldwide. With the increased demand for delivering services to a large number of users, they need to offer differentiated services to users and meet their quality expectations. Existing resource management systems in data centers are yet to support Service Level Agreement (SLA)-oriented resource allocation and thus need to be enhanced to realize cloud computing and utility computing. In addition, no work has been done to collectively incorporate customer-driven service management, computational risk management, and autonomic resource management into a market-based resource management system that targets the rapidly changing enterprise requirements of Cloud computing. This paper presents the vision, challenges, and architectural elements of SLA-oriented resource management. The proposed architecture supports the integration of market-based provisioning policies and virtualisation technologies for flexible allocation of resources to applications. The performance results obtained from our working prototype system show the feasibility and effectiveness of SLA-based resource provisioning in Clouds.
    Comment: 10 pages, 7 figures, Conference Keynote Paper: 2011 IEEE International Conference on Cloud and Service Computing (CSC 2011, IEEE Press, USA), Hong Kong, China, December 12-14, 2011
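    One way to picture what SLA-oriented, market-based provisioning can mean in practice is a toy admission policy that ranks requests by revenue plus avoided SLA penalty per core and admits them greedily within capacity. The request attributes and values below are illustrative assumptions, not the paper's architecture:

        from dataclasses import dataclass

        @dataclass
        class Request:
            user: str
            cores: int          # resources demanded
            revenue: float      # price the user pays if served
            penalty: float      # SLA penalty avoided by serving the request

        def provision(requests: list[Request], capacity: int) -> list[Request]:
            """Greedy market-based admission: serve the requests whose combined
            revenue-plus-avoided-penalty per core is highest, within capacity."""
            ranked = sorted(requests, key=lambda r: (r.revenue + r.penalty) / r.cores, reverse=True)
            admitted, used = [], 0
            for r in ranked:
                if used + r.cores <= capacity:
                    admitted.append(r)
                    used += r.cores
            return admitted

        reqs = [Request("gold", 8, 40.0, 30.0), Request("silver", 4, 12.0, 5.0),
                Request("bronze", 16, 20.0, 2.0)]
        for r in provision(reqs, capacity=16):
            print(f"admit {r.user}: {r.cores} cores")   # admits gold, then silver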

    Knowledge-Intensive Processes: Characteristics, Requirements and Analysis of Contemporary Approaches

    Engineering of knowledge-intensive processes (KiPs) is far from being mastered, since they are genuinely knowledge- and data-centric and require substantial flexibility at both design time and run time. In this work, starting from a scientific literature analysis in the area of KiPs and from three real-world domains and application scenarios, we provide a precise characterization of KiPs. Furthermore, we devise some general requirements related to KiPs management and execution. These requirements contribute to the definition of an evaluation framework to assess current system support for KiPs. To this end, we present a critical analysis of a number of existing process-oriented approaches, discussing their efficacy against the requirements.

    Changeable Manufacturing on the Network Level

    Agility, in the sense of changeability at the manufacturing network level, and here especially its business perspective, has received less attention than the other dimensions of changeability at the lower production levels or the technological perspective. The present paper aims to enrich the concept of agility in this sense by taking into account strategic management concepts that have so far received little attention in relation to changeability. Namely, we consider the lead factory, capacity pooling and allying, operational flexibility, and the combination of products, services, and software as fruitful enrichments of the umbrella concept of changeability. In so doing, we highlight interdependencies between agility and the analyzed concepts, as well as the other changeability dimensions at the lower production level of factories or sites. On this basis, we formulate six hypotheses derived from the presented theoretical considerations. The methodological approach of our research is thus conceptual and hypothesis-generating. Our work is intended to form the basis for further conceptual and empirical research on agility.

    Maintaining Replica Consistency Over Large-Scale Data Grid Using Update Propagation Technique

    A Data Grid is an organized collection of nodes in a wide-area network that contribute computation, data storage, and application resources. In a Data Grid, large numbers of users are distributed across a wide-area environment that is dynamic and heterogeneous. Data management is a central issue: data transparency, consistency, fault tolerance, automatic management, and performance are the parameters users care about in a grid environment. Data management techniques must scale while addressing the autonomy, dynamicity, and heterogeneity of the data resources. Data replication is a well-known technique for reducing access latency and improving availability and performance in a distributed computing environment. Replication, however, introduces the problem of maintaining consistency among the replicas when files are allowed to be updated: update information must be propagated to all replicas to guarantee correct reads of remote replicas. Asynchronous replication is a commonly accepted solution to the replica consistency problem. A few studies have addressed replica consistency in Data Grids, but the proposed techniques are neither efficient nor scalable. They cannot be used in a real Data Grid because they do not address large numbers of replica sites, large-scale distribution, load balancing, or site autonomy (the ability of a grid site to join and leave the grid community at any time). This thesis proposes a new asynchronous replication protocol called Update Propagation Grid (UPG) to maintain replica consistency over a large-scale Data Grid. In UPG, updates reach all online secondary replicas through a propagation technique in which nodes are organized into a logical two-dimensional grid structure. The proposed update propagation technique is a dynamic hybrid push-pull technique that addresses site autonomy, efficiency, scalability, load balancing, and fairness. Two performance studies were conducted to compare the proposed technique with others: the first involves mathematical and simulation analysis, and the second is based on a queueing network model. The results show that the proposed technique scales well to large numbers of replica sites and high request loads, reduces the average update reach time by 5% to 97%, and achieves load balancing while providing update propagation fairness.
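    To make the propagation idea concrete, the following Python sketch disseminates an update over a logical two-dimensional grid of replicas: a push phase along the primary's row and then down each column, plus a pull step for sites that rejoin after being offline. The node addressing and version payload are illustrative assumptions; the actual UPG protocol is richer than this:

        # Replicas form a logical R x C grid; each holds the latest version it has seen.
        R, C = 3, 4
        version = {(r, c): 0 for r in range(R) for c in range(C)}   # replica -> version
        online = {node: True for node in version}                    # site autonomy flag

        def push(update: int, primary=(0, 0)):
            """Push phase: along the primary's row, then down each column."""
            pr = primary[0]
            version[primary] = update
            for c in range(C):                      # phase 1: the primary's row
                if online[(pr, c)]:
                    version[(pr, c)] = update
            for c in range(C):                      # phase 2: each row node's column
                for r in range(R):
                    if online[(r, c)] and version[(pr, c)] == update:
                        version[(r, c)] = update

        def pull(node):
            """Pull step: an offline node reconciles on rejoin via its row neighbors."""
            r, _ = node
            version[node] = max(version[(r, k)] for k in range(C) if online[(r, k)])

        online[(2, 3)] = False                      # one site leaves the grid
        push(update=1)                              # update misses the offline site
        online[(2, 3)] = True                       # the site rejoins...
        pull((2, 3))                                # ...and pulls the missed update
        print(all(v == 1 for v in version.values()))   # True: replicas consistent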

    Real-Time Data Processing With Lambda Architecture

    Data has evolved immensely in recent years, in type, volume, and velocity, and several frameworks exist to handle big data applications. The project focuses on the Lambda Architecture proposed by Marz and its application to real-time data processing. The architecture unites the benefits of batch and stream processing: historical data can be processed with high precision and complex algorithms without losing short-term information, alerts, and insights. The Lambda Architecture can serve a wide range of use cases and workloads while withstanding hardware and human error. The layered architecture enhances loose coupling and flexibility in the system, a major benefit that makes it possible to understand the trade-offs and apply various tools and technologies across the layers. Approaches to building the LA have advanced as the underlying tools have improved. The project demonstrates a simplified, maintainable architecture for the LA.
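    The essence of the LA fits in a few lines: an immutable master dataset, a batch view recomputed from scratch, a real-time view covering events since the last batch run, and a query that merges the two. A minimal Python sketch, illustrative rather than the project's implementation:

        from collections import Counter

        master_dataset: list[str] = []       # immutable log of all events
        batch_view: Counter = Counter()       # exact but stale
        realtime_view: Counter = Counter()    # approximate but fresh

        def ingest(event: str):
            master_dataset.append(event)      # append-only master dataset
            realtime_view[event] += 1         # speed layer: incremental update

        def run_batch():
            """Batch layer: recompute the view from scratch, reset the speed layer."""
            global batch_view
            batch_view = Counter(master_dataset)
            realtime_view.clear()

        def query(event: str) -> int:
            """Serving layer: merge the batch and real-time views."""
            return batch_view[event] + realtime_view[event]

        for e in ["click", "view", "click"]:
            ingest(e)
        run_batch()
        ingest("click")                       # arrives after the batch run
        print(query("click"))                 # 3 = batch (2) + real-time (1)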

    Agent-based techniques for National Infrastructure Simulation

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2002. Includes bibliographical references (leaves 35-37).
    Modern society is dependent upon its networks of infrastructure. These networks have grown in size and complexity to become interdependent, creating within them hidden vulnerabilities. The critical nature of these infrastructures has led to the establishment of the National Infrastructure Simulation and Analysis Center (NISAC) by the United States Government. The goal of NISAC is to provide the simulation capability to understand infrastructure interdependencies, detect vulnerabilities, and provide infrastructure planning and crisis response assistance. This thesis examines recent techniques for simulation and analyzes their suitability for the national infrastructure simulation problem. Variable- and agent-based simulation models are described and compared; the bottom-up approach of the agent-based model is found to be more suitable than the top-down approach of the variable-based model. Supercomputer and distributed (grid) computing solutions are explored; both are found to be valid solutions with complementary strengths. Software architectures for implementation, such as the traditional object-oriented approach and the web service model, are examined. Solutions to meet NISAC objectives using the agent-based simulation model implemented with web services and a combination of hardware configurations are proposed.
    by Kenny Lin. S.M.
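    The bottom-up character of agent-based simulation can be illustrated with a toy Python sketch (not NISAC's implementation): each infrastructure node is an agent with a local failure rule, and the interdependencies alone let a single disturbance cascade through the network:

        class Node:
            """An infrastructure node as an agent with a local failure rule."""
            def __init__(self, name: str, depends_on: list["Node"]):
                self.name, self.depends_on, self.failed = name, depends_on, False

            def step(self):
                """Local rule: fail if more than half of the dependencies are down."""
                if not self.failed and self.depends_on:
                    down = sum(d.failed for d in self.depends_on)
                    if down > len(self.depends_on) / 2:
                        self.failed = True

        # A tiny interdependent network (hypothetical sectors and dependencies).
        power = Node("power", [])
        telecom = Node("telecom", [power])
        water = Node("water", [power, telecom])
        grid = [power, telecom, water]

        power.failed = True                   # initial disturbance
        for _ in range(len(grid)):            # iterate until the cascade settles
            for node in grid:
                node.step()
        print([n.name for n in grid if n.failed])   # ['power', 'telecom', 'water']

    No global equation drives the outcome; the cascade emerges from the agents' local rules, which is what distinguishes the bottom-up model from a top-down variable-based one.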

    Progress in Material Handling Research: 2016

    Table of contents