
    Everyone is Different! Exploring the Issues and Problems with ERP Enabled Shared Service Initiatives

    In today’s increasingly competitive environment, there is constant pressure on corporate leaders to add value to their organizations. Contemporary organizations are increasingly moving to business models that reduce duplicated supporting processes and staff by streamlining business processes that are not central to the organization’s operations and concentrating on strategic, or core, business processes. This concept, known as Shared Services, bundles some of the supporting processes and non-strategic activities into a separate organization, which in turn treats those processes and activities as the core of its own business. Shared Services consolidate and support redundant functions, such as accounts payable and procurement, for disparate business units. By leveraging economies of scale from a common IT infrastructure, such a group is able to market specific services to business units. Many organizations employ Enterprise Resource Planning (ERP) systems, for example SAP, to facilitate Shared Service initiatives by aggregating backroom functionality across departments. This research-in-progress paper investigates issues and problems with ERP-enabled Shared Services in 19 organizations. The results reveal five main issues that organizations face in implementing a Shared Services initiative.

    A Case for Cooperative and Incentive-Based Coupling of Distributed Clusters

    Research interest in Grid computing has grown significantly over the past five years. Management of distributed resources is one of the key issues in Grid computing, and central to it is the effectiveness of resource allocation, which determines the overall utility of the system. Current approaches to superscheduling in a Grid environment are non-coordinated: application-level schedulers or brokers make scheduling decisions independently of one another. Clearly, this can exacerbate the load-sharing and utilization problems of distributed resources, because suboptimal schedules are likely to occur. To overcome these limitations, we propose a mechanism for coordinated sharing of distributed clusters based on computational economy. The resulting environment, called \emph{Grid-Federation}, allows the transparent use of resources from the federation when a site's local resources are insufficient to meet its users' requirements. The use of a computational-economy methodology in coordinating resource allocation not only facilitates QoS-based scheduling, but also enhances the utility delivered by resources.
    Comment: 22 pages, extended version of the conference paper published at IEEE Cluster'05, Boston, M
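The economy-based allocation this abstract describes can be illustrated with a toy broker that collects price quotes from federated clusters and dispatches a job to the cheapest one with capacity. This is a minimal sketch under assumed names (`Cluster`, `schedule`, `cost_per_cpu`), not the paper's actual Grid-Federation protocol.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    """A federated cluster that quotes a price for hosting a job."""
    name: str
    free_cpus: int
    cost_per_cpu: float  # hypothetical per-CPU price in the computational economy

    def quote(self, cpus_needed):
        """Return a price quote, or None if this cluster lacks capacity."""
        if cpus_needed > self.free_cpus:
            return None
        return cpus_needed * self.cost_per_cpu

def schedule(job_cpus, federation):
    """Coordinated, economy-based allocation: pick the cheapest valid quote."""
    valid = [(c.quote(job_cpus), c) for c in federation]
    valid = [(price, c) for price, c in valid if price is not None]
    if not valid:
        return None  # no cluster in the federation can host the job
    price, winner = min(valid, key=lambda pc: pc[0])
    winner.free_cpus -= job_cpus  # commit the allocation
    return winner.name
```

A non-coordinated scheme would have each broker pick independently (often overloading the same cheap cluster); routing every decision through shared quotes is what makes the allocation coordinated.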

    Evaluation of the ICT Test Bed Project: the qualitative report


    Industrial districts as organizational environments: resources, networks and structures

    The paper combines economic and sociological perspectives on organizations in order to gain a better understanding of the forces shaping the structures of industrial districts (IDs) and the organizations of which they are constituted. To effect this combination, the resource-based view (RBV) and resource dependency theory are brought together to explain the evolution of different industry structures. The paper thus extends work by Toms and Filatotchev by spatializing consideration of resource distribution and resource dependence. It has important implications for conventional interpretations in the fields of business and organizational history, and for the main areas of theory hitherto considered separately, particularly the Chandlerian model of corporate hierarchy as contrasted with the alternative of clusters of small firms coordinated by networks.

    HIL: designing an exokernel for the data center

    We propose a new Exokernel-like layer to allow mutually untrusting physically deployed services to efficiently share the resources of a data center. We believe that such a layer offers not only efficiency gains, but may also enable new economic models, new applications, and new security-sensitive uses. A prototype (currently in active use) demonstrates that the proposed layer is viable, and can support a variety of existing provisioning tools and use cases.
    Partial support for this work was provided by the MassTech Collaborative Research Matching Grant Program, National Science Foundation awards 1347525 and 1149232, as well as the several commercial partners of the Massachusetts Open Cloud, who may be found at http://www.massopencloud.or
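The core idea of such an isolation layer is narrow: track exclusive ownership of physical nodes by mutually untrusting projects, and enforce that only the owner can act on a node. The sketch below is purely illustrative (the class and method names are assumptions, not HIL's real API).

```python
class IsolationLayer:
    """Toy sketch of an exokernel-like layer for a data center:
    grants exclusive ownership of physical nodes to projects and
    refuses operations by non-owners. Illustrative only."""

    def __init__(self, nodes):
        # node name -> owning project, or None if free
        self.owner = {n: None for n in nodes}

    def allocate(self, project, count):
        """Grant `count` free nodes to `project`, or fail atomically."""
        free = [n for n, p in self.owner.items() if p is None]
        if len(free) < count:
            raise RuntimeError("insufficient free nodes")
        grant = free[:count]
        for n in grant:
            self.owner[n] = project
        return grant

    def release(self, project, node):
        """Return a node to the free pool; only its owner may do so."""
        if self.owner.get(node) != project:
            raise PermissionError("node not owned by this project")
        self.owner[node] = None
```

Everything above the layer (OS images, provisioning tools) stays untouched; the layer only mediates who holds which bare-metal node, which is what keeps it thin enough to sit under existing tools.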

    Desegregating HRM: A Review and Synthesis of Micro and Macro Human Resource Management Research

    Since the early 1980s, the field of HRM has seen the evolution of two independent subfields (strategic and functional), which we believe is dysfunctional to the field as a whole. We propose a typology of HRM research based on two dimensions: level of analysis (individual/group or organization) and number of practices (single or multiple). We use this framework to review recent research in each of the four sub-areas. We argue that while significant progress has been made within each area, the potential for greater gains exists by looking across the areas. Toward this end, we suggest some future research directions based on a more integrative view of HRM. We believe that both areas can contribute significantly to each other, resulting in a more profound impact on the field of HRM than each can have independently.

    A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures

    Scientific problems that depend on processing large amounts of data require overcoming challenges in multiple areas: managing large-scale data distribution, co-placement and scheduling of data with compute resources, and storing and transferring large volumes of data. We analyze the ecosystems of the two prominent paradigms for data-intensive applications, hereafter referred to as the high-performance computing and the Apache-Hadoop paradigms. We propose a basis of common terminology and functional factors upon which to analyze the two paradigms. We discuss the concept of "Big Data Ogres" and their facets as a means of understanding and characterizing the most common application workloads found across the two paradigms. We then discuss the salient features of the two paradigms, and compare and contrast the two approaches. Specifically, we examine common implementations and approaches of these paradigms, shed light upon the reasons for their current "architectures", and discuss some typical workloads that utilize them. In spite of the significant software distinctions, we believe there is architectural similarity. We discuss the potential integration of different implementations, across the different levels and components. Our comparison progresses from a fully qualitative examination of the two paradigms to a semi-quantitative methodology. We use a simple and broadly used Ogre (K-means clustering), characterize its performance on a range of representative platforms, and cover several implementations from both paradigms. Our experiments provide an insight into the relative strengths of the two paradigms. We propose that the set of Ogres will serve as a benchmark to evaluate the two paradigms along different dimensions.
    Comment: 8 pages, 2 figures
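The benchmark Ogre named in this abstract, K-means clustering, is a simple two-step iteration (assign points to the nearest center, then move each center to its cluster's mean). A plain-Python sketch of Lloyd's algorithm on 2-D points, not tied to any implementation from the paper:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm for K-means on 2-D points (illustrative sketch)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers from the data
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for x, y in points:
            j = min(range(k),
                    key=lambda c: (x - centers[c][0]) ** 2 + (y - centers[c][1]) ** 2)
            clusters[j].append((x, y))
        # Update step: move each center to the mean of its cluster.
        for j, members in enumerate(clusters):
            if members:
                centers[j] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return centers
```

Both paradigms map naturally onto this structure: the assignment step is embarrassingly parallel (MPI ranks or Hadoop map tasks), while the update step is a reduction over per-cluster sums, which is why K-means is a convenient cross-paradigm benchmark.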