
    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding the aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications; in particular, different engineering constraints for hardware and software must be met in order to support them. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
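
    To make the task-graph structure described above concrete, the following is a minimal, purely illustrative Python sketch (not the Blue Waters middleware): tasks are nodes with explicit input dependencies, and a small dispatcher runs each task as soon as its inputs are available. A real MTC runtime would additionally have to minimise dispatch overhead and avoid routing task communication through the shared file system.

        from concurrent.futures import ThreadPoolExecutor

        # Hypothetical illustration: an MTC workload expressed as a graph of
        # short tasks whose edges are explicit input/output dependencies.
        class Task:
            def __init__(self, name, func, inputs=()):
                self.name = name        # task identifier
                self.func = func        # the (short-running) work to perform
                self.inputs = inputs    # names of tasks whose outputs we consume

        def run_graph(tasks, workers=8):
            """Dispatch each task as soon as all of its inputs are available."""
            done, results = set(), {}
            pending = dict(tasks)       # name -> Task
            with ThreadPoolExecutor(max_workers=workers) as pool:
                while pending:
                    ready = [t for t in pending.values() if set(t.inputs) <= done]
                    if not ready:
                        raise ValueError("cycle or missing dependency in task graph")
                    futures = {pool.submit(t.func, *(results[i] for i in t.inputs)): t
                               for t in ready}
                    for fut, t in futures.items():
                        results[t.name] = fut.result()
                        done.add(t.name)
                        del pending[t.name]
            return results

        # Example: two independent tasks feeding one reduction task.
        graph = {
            "a":   Task("a", lambda: 1),
            "b":   Task("b", lambda: 2),
            "sum": Task("sum", lambda x, y: x + y, inputs=("a", "b")),
        }
        print(run_graph(graph)["sum"])  # -> 3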

    The Country-specific Organizational and Information Architecture of ERP Systems at Globalised Enterprises

    Competition in the market forces companies to adapt to a changing environment. Most recently, the economic and financial crisis has been accelerating the alteration of both the business and IT models of enterprises. The forces of globalization and internationalization motivate the restructuring of business processes and, consequently, of IT processes. To depict these changes in a unified framework, we need the concept of Enterprise Architecture as a theoretical approach that deals with the various tiers, aspects and views of business processes and the different layers of application, software and hardware systems. The paper outlines a wide-ranging theoretical background for analyzing the re-engineering and re-organization of ERP systems at international or transnational companies in the middle-sized EU member states. The research carried out so far has unravelled the typical structural changes and the models for internal business networks and their modification, which reflect centralization, decentralization and hybrid approaches. Based on the results obtained recently, a future research program has been drawn up to deepen our understanding of the trends within the world of ERP systems.

    Keywords: Information System; ERP; Enterprise Resource Planning; Enterprise Architecture; Globalization; Centralization; Decentralization; Hybrid
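
    The toy Python sketch below (all names invented, not taken from the paper) merely illustrates the three structural models mentioned above, centralised, decentralised and hybrid ERP landscapes, in terms of how many system instances the enterprise architecture has to govern.

        from enum import Enum

        class ErpModel(Enum):
            CENTRALISED   = "one corporate instance serving all subsidiaries"
            DECENTRALISED = "an independent ERP instance per country subsidiary"
            HYBRID        = "a central core plus country-specific satellite modules"

        def instances(model, subsidiaries):
            """List the ERP instances the enterprise architecture has to govern."""
            if model is ErpModel.CENTRALISED:
                return ["corporate-erp"]
            if model is ErpModel.DECENTRALISED:
                return ["erp-" + c.lower() for c in subsidiaries]
            return ["corporate-core"] + ["local-" + c.lower() for c in subsidiaries]

        # Example: a hybrid landscape for three hypothetical country subsidiaries.
        print(instances(ErpModel.HYBRID, ["HU", "AT", "SK"]))
        # -> ['corporate-core', 'local-hu', 'local-at', 'local-sk']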

    Next Generation Cloud Computing: New Trends and Research Directions

    The landscape of cloud computing has changed significantly over the last decade. Not only have more providers and service offerings crowded the space, but cloud infrastructure that was traditionally limited to single-provider data centers is also evolving. In this paper, we first discuss the changing cloud infrastructure and consider the use of infrastructure from multiple providers and the benefit of decentralising computing away from data centers. These trends have resulted in the need for a variety of new computing architectures that will be offered by future cloud infrastructure. These architectures are anticipated to impact areas such as connecting people and devices, data-intensive computing, the service space and self-learning systems. Finally, we lay out a roadmap of challenges that will need to be addressed to realise the potential of next generation cloud systems.

    Comment: Accepted to Future Generation Computer Systems, 07 September 201
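
    As a rough illustration of decentralising computing away from data centres, the hypothetical Python sketch below picks the cheapest site, edge node or data centre, that still meets a request's latency budget. The site names and the cost and latency figures are invented for the example and do not come from the paper.

        from dataclasses import dataclass

        @dataclass
        class Site:
            name: str
            rtt_ms: float         # round-trip latency to the user
            cost_per_hour: float  # normalised price of a worker on this site

        SITES = [
            Site("edge-node",     rtt_ms=5.0,  cost_per_hour=0.12),
            Site("regional-dc",   rtt_ms=30.0, cost_per_hour=0.06),
            Site("hyperscale-dc", rtt_ms=90.0, cost_per_hour=0.04),
        ]

        def place(latency_budget_ms):
            """Cheapest site that still satisfies the request's latency budget."""
            feasible = [s for s in SITES if s.rtt_ms <= latency_budget_ms]
            if not feasible:
                raise RuntimeError("no site satisfies the latency budget")
            return min(feasible, key=lambda s: s.cost_per_hour)

        print(place(10).name)   # latency-critical request -> edge-node
        print(place(120).name)  # delay-tolerant request   -> hyperscale-dc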

    The science-policy interfaces of the European network for observing our changing planet: From Earth Observation data to policy-oriented decisions

    This paper reports on major outcomes of the ERA-PLANET (The European network for observing our changing planet) project, which was funded under the Horizon 2020 ERA-NET co-funding scheme. ERA-PLANET strengthened the European Research Area in the domain of Earth Observation (EO), in coherence with the European participation in the Group on Earth Observation and the Copernicus European Union's Earth Observation programme. ERA-PLANET was implemented through four projects focused on smart cities and resilient societies (SMURBS), resource efficiency and environmental management (GEOEssential), global changes and environmental treaties (iGOSP), and polar areas and natural resources (iCUPE). These projects developed specific science-policy workflows and interfaces to address selected environmental policy issues and to design cost-effective strategies aimed at achieving targeted objectives. Key Enabling Technologies were implemented to enhance the 'data to knowledge' transition in support of environmental policy making. Data cube technologies, the Virtual Earth Laboratory, Earth Observation ontologies and Knowledge Platforms were developed and used for such applications.

    SMURBS brought a substantial contribution to the resilient cities and human settlements topic, which was adopted by GEO as its 4th engagement priority, bringing urban resilience into the GEO agenda on par with climate change, sustainable development and disaster risk reduction linked to environmental policies. GEOEssential is contributing to the development of the Essential Variables (EVs) concept, which is encouraging and should allow the EO community to complete the description of the Earth System with EVs in the near future. This will clearly improve our capacity to address intertwined environmental and development policies as a nexus.

    iGOSP supports the implementation of the GEO Flagship on Mercury (GOS4M) and the GEO Initiative on POPs (GOS4POPs) by developing a new integrated approach for global real-time monitoring of environmental quality with respect to the contamination of air, water and human matrices by toxic substances such as mercury and persistent organic pollutants. iGOSP developed end-user-oriented Knowledge Hubs that provide data repository systems integrated with data management consoles and knowledge information systems.

    The main outcomes from iCUPE are novel and comprehensive data sets and a modelling activity that contributed to delivering science-based insights for the Arctic region. Applications enable the definition and monitoring of Arctic Essential Variables and set up processes towards the UN 2030 SDGs, including health (SDG 3) and clean water resources and sanitation (SDGs 6 and 14).

    Peer reviewed
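
    To give a concrete, if hypothetical, flavour of the data cube idea mentioned above (this is not the ERA-PLANET tooling), the sketch below uses xarray to hold an Earth Observation variable as a (time, lat, lon) cube and derives a regional, time-averaged indicator from it, in the spirit of an Essential Variable aggregate. All values, coordinates and the bounding box are synthetic.

        import numpy as np
        import xarray as xr

        # Synthetic cube: 12 monthly fields of a surface pollutant concentration.
        cube = xr.DataArray(
            np.random.rand(12, 36, 51),
            coords={
                "time": np.arange(12),
                "lat": np.linspace(35.0, 70.0, 36),
                "lon": np.linspace(-10.0, 40.0, 51),
            },
            dims=("time", "lat", "lon"),
            name="pollutant_concentration",
        )

        # Regional, annual-mean indicator over an invented bounding box.
        roi = cube.sel(lat=slice(45.0, 55.0), lon=slice(5.0, 15.0))
        indicator = roi.mean(dim=("time", "lat", "lon"))
        print(float(indicator))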

    Data Replication and Its Alignment with Fault Management in the Cloud Environment

    Nowadays, exponential data growth has become one of the major challenges all over the world. It may cause a series of negative impacts such as network overloading, high system complexity and inadequate data security. Cloud computing has been developed as a novel paradigm that alleviates massive data processing challenges with its on-demand services and distributed architecture. Data replication strategically distributes the data access load by creating multiple copies of the data at multiple cloud data centres. A cloud environment with replication not only achieves shorter response times, higher data availability and a more balanced resource load, but also protects the cloud environment against upcoming faults. A reactive fault tolerance strategy is also required to handle faults after they have occurred. As a result, data replication strategies should be aligned with reactive fault tolerance strategies to achieve a complete management chain in the cloud environment. In this thesis, a data replication and fault management framework is proposed to establish decentralised, overarching management of the cloud environment. Three data replication strategies are first proposed based on this framework. A replica creation strategy is proposed to reduce the total cost by jointly considering data dependency and access frequency in the replica creation decision-making process. In addition, a cloud-map-oriented and cost-efficiency-driven replica creation strategy is proposed to achieve the optimal cost reduction per replica in the cloud environment. The local and remote data relationships are further analysed by introducing two novel data dependency types, Within-DataCentre Data Dependency and Between-DataCentre Data Dependency, according to data location. Furthermore, a network-performance-based replica selection strategy is proposed to avoid potential network overloading problems and, at the same time, to increase the number of concurrently running instances.
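
    The hypothetical Python sketch below is a much-simplified illustration of two of the ideas in this abstract, replica creation driven by access frequency and data dependency, and replica selection driven by network performance. The thresholds, fields and data centre names are invented; this is not the thesis's algorithm.

        from dataclasses import dataclass, field

        @dataclass
        class Replica:
            datacentre: str
            bandwidth_mbps: float   # measured link bandwidth to the requester
            active_transfers: int   # current load on that data centre

        @dataclass
        class DataItem:
            name: str
            access_freq: float                                 # accesses per hour
            dependencies: list = field(default_factory=list)   # related data items
            replicas: list = field(default_factory=list)

        def should_replicate(item, freq_threshold=50.0, dep_threshold=3):
            """Create a new replica when an item is hot or tightly coupled."""
            return (item.access_freq > freq_threshold
                    or len(item.dependencies) >= dep_threshold)

        def select_replica(item):
            """Prefer the replica with the best effective bandwidth per transfer."""
            return max(item.replicas,
                       key=lambda r: r.bandwidth_mbps / (1 + r.active_transfers))

        item = DataItem("sensor-log", access_freq=80.0,
                        replicas=[Replica("dc-eu", 900.0, 12),
                                  Replica("dc-us", 400.0, 1)])
        print(should_replicate(item))           # True: frequently accessed item
        print(select_replica(item).datacentre)  # dc-us: better effective bandwidth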