
    The Country-specific Organizational and Information Architecture of ERP Systems at Globalised Enterprises

    Market competition forces companies to adapt to a changing environment. Most recently, the economic and financial crisis has accelerated changes in both the business and IT models of enterprises. The forces of globalization and internationalization motivate the restructuring of business processes and, consequently, of IT processes. To depict these changes in a unified framework, we need the concept of Enterprise Architecture: a theoretical approach that deals with the various tiers, aspects and views of business processes and with the different layers of application, software and hardware systems. The paper outlines a wide-ranging theoretical background for analyzing the re-engineering and re-organization of ERP systems at international or transnational companies in the middle-sized EU member states. The research carried out so far has unravelled the typical structural changes and the models for internal business networks, along with their modifications reflecting centralization, decentralization and hybrid approaches. Based on the results obtained, a future research program has been drawn up to deepen our understanding of trends within the world of ERP systems.
    Keywords: Information System; ERP; Enterprise Resource Planning; Enterprise Architecture; Globalization; Centralization; Decentralization; Hybrid

    A Reconfigurable Vector Instruction Processor for Accelerating a Convection Parametrization Model on FPGAs

    High Performance Computing (HPC) platforms allow scientists to model computationally intensive algorithms. HPC clusters increasingly use General-Purpose Graphics Processing Units (GPGPUs) as accelerators. FPGAs provide an attractive alternative to GPGPUs for use as co-processors, but they are still far from mainstream because of a number of challenges faced when using FPGA-based platforms. Our research aims to make FPGA-based high performance computing more accessible to the scientific community. In this work we present the results of investigating the acceleration of a particular atmospheric model, Flexpart, on FPGAs, focusing on the model's most computationally intensive kernel. The key contribution of our work is the architectural exploration we undertook to arrive at a solution that best exploits the parallelism available in the legacy code and is also convenient to program, so that the compilation of high-level legacy code to our architecture can eventually be fully automated. We present three different types of architecture, comparing their resource utilization and performance, and propose that an architecture with a number of computational cores, each built along the lines of a vector instruction processor, works best in this scenario and is a promising candidate for a generic FPGA-based platform for scientific computation. We also present the results of experiments with various configuration parameters of the proposed architecture, to show its utility in adapting to a range of scientific applications.
    Comment: This is an extended pre-print version of work that was presented at the international symposium on Highly Efficient Accelerators and Reconfigurable Technologies (HEART2014), Sendai, Japan, June 9-11, 2014
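    The core idea behind a vector instruction processor is that one instruction operates on many data elements at once, so a data-parallel loop issues far fewer instructions than its scalar equivalent. The following toy model is a conceptual sketch only, not the architecture from the paper; `VLEN` and the `vadd` instruction are hypothetical names chosen for illustration.

```python
# Conceptual sketch of vector-lane execution; not the paper's architecture.
VLEN = 8  # hypothetical vector length (lanes per core)

def vadd(a, b):
    """One vector instruction: element-wise add across all VLEN lanes."""
    assert len(a) == len(b) == VLEN
    return [x + y for x, y in zip(a, b)]

def vector_add(xs, ys):
    """Add two arrays by streaming VLEN-wide chunks through vadd.
    For N elements this issues N / VLEN vector instructions instead of
    N scalar adds, which is the source of the speedup."""
    assert len(xs) == len(ys) and len(xs) % VLEN == 0
    out = []
    for i in range(0, len(xs), VLEN):
        out.extend(vadd(xs[i:i + VLEN], ys[i:i + VLEN]))
    return out

xs, ys = list(range(32)), [1] * 32
result = vector_add(xs, ys)  # 32/8 = 4 vector instructions issued
```

    In hardware, each lane would be a physical datapath replicated on the FPGA fabric, which is why the abstract's multi-core vector design maps the loop-level parallelism of the legacy kernel directly onto silicon.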

    Accelerating Training of Deep Neural Networks via Sparse Edge Processing

    We propose a reconfigurable hardware architecture for deep neural networks (DNNs), capable of both online training and inference, which uses algorithmically pre-determined, structured sparsity to significantly lower memory and computational requirements. This novel architecture introduces the notion of edge-processing to provide flexibility, and combines junction pipelining with operational parallelization to speed up training. The overall effect is to reduce network complexity by factors of up to 30x and training time by up to 35x relative to GPUs, while maintaining high fidelity of inference results. This has the potential to enable extensive parameter searches and the development of the largely unexplored theoretical foundations of DNNs. The architecture automatically adapts itself to different network sizes given the available hardware resources. As a proof of concept, we show results obtained for different bit widths.
    Comment: Presented at the 26th International Conference on Artificial Neural Networks (ICANN) 2017 in Alghero, Italy
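    To see why pre-determined, structured sparsity cuts memory and compute, consider a layer where every output neuron keeps only a fixed fan-in of `d` inputs chosen by a deterministic rule known at design time. The sketch below is illustrative only and does not reproduce the paper's exact connectivity scheme; the interleaved mask rule is an assumption made for the example.

```python
# Illustrative sketch of structured sparsity; not the paper's exact scheme.

def dense_params(n_in, n_out):
    """Weight count of a fully connected layer (biases omitted)."""
    return n_in * n_out

def structured_sparse_params(n_in, n_out, d):
    """Weight count when each output neuron connects to only d inputs."""
    assert d <= n_in
    return d * n_out

def connectivity(n_in, n_out, d):
    """Hypothetical deterministic, hardware-friendly mask: output j reads
    inputs (j*d + k) mod n_in for k in range(d), an interleaved pattern
    that needs no stored indices because it is computable on the fly."""
    return [[(j * d + k) % n_in for k in range(d)] for j in range(n_out)]

n_in, n_out, d = 1024, 1024, 32
reduction = dense_params(n_in, n_out) / structured_sparse_params(n_in, n_out, d)
# 1024*1024 weights shrink to 32*1024, a 32x reduction in storage and
# multiply-accumulate work, in the spirit of the abstract's "up to 30x".
```

    Because the mask is fixed before training, the hardware can hard-wire the sparse connections rather than fetching index lists, which is what makes this kind of sparsity cheap to exploit on a reconfigurable fabric.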

    Accelerating positive change in e-records management : the AC+erm project at Northumbria

    The AC+erm project aims to investigate and critically explore issues and practical strategies for accelerating positive change in electronic records management (ERM). The project’s focus is on designing an organisation-centred architecture from three perspectives: people, process and technology. This paper introduces the project, describes the methodology (a systematic literature review, e-Delphi studies and colloquia) and presents solutions for improving ERM developed from the people and process e-Delphi responses. ERM is particularly challenging, and the solutions offered by the Delphi participants are numerous, ranging in scale and complexity. The one firm conclusion that can be drawn is that the majority of the solutions are people-focussed. The Cynefin framework is introduced as one approach for providing a conceptual overview of our findings on ERM. The sample solutions presented in this paper provide a toolkit of ‘probes’ and ‘interventions’ for practical application in organisations.