
    A review of parallel computing for large-scale remote sensing image mosaicking

    Interest in image mosaicking has been spurred by a wide variety of research and management needs. However, for large-scale applications, remote sensing image mosaicking usually requires significant computational capabilities. Several studies have attempted to apply parallel computing to improve image mosaicking algorithms and to speed up the calculation process. The state of the art of this field has not yet been summarized, although such a summary is essential for a better understanding of, and further research on, large-scale image mosaicking parallelism. This paper provides a perspective on the current state of image mosaicking parallelization for large-scale applications. We first introduce the motivation for parallelizing image mosaicking in large-scale applications, and analyze the difficulties of parallel image mosaicking at large scale, such as scheduling huge numbers of dependent tasks, programming multi-step procedures, and dealing with frequent I/O operations. We then summarize the existing studies of parallel computing in image mosaicking for large-scale applications with respect to problem decomposition and parallel strategy, parallel architecture, task scheduling strategy, and implementation of image mosaicking parallelization. Finally, the key problems and potential future research directions for image mosaicking are addressed.
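    As a rough illustration of the decomposition and dependent-task issues the review surveys, the following minimal Python sketch (a hypothetical tile layout with placeholder per-tile work, not taken from any surveyed system) runs independent tile jobs in parallel and serializes only the final blend step that depends on every tile:

        from concurrent.futures import ProcessPoolExecutor

        def mosaic_tile(tile_id):
            # Placeholder for per-tile work: in real mosaicking this would
            # read scenes, warp, resample, and perform heavy raster I/O.
            return f"tile-{tile_id}"

        def blend(tiles):
            # Dependent final step, e.g. seamline feathering across borders;
            # it cannot start before all tile tasks have finished.
            return "+".join(tiles)

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:
                tiles = list(pool.map(mosaic_tile, range(16)))
            print(blend(tiles))

    The blend step is exactly the kind of synchronization point that makes scheduling large numbers of dependent mosaicking tasks harder than an embarrassingly parallel workload.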

    Towards an open cloud marketplace: vision and first steps

    As one of the most promising emerging concepts in Information Technology (IT), cloud computing is transforming how IT is consumed and managed; yielding improved cost efficiencies, and delivering flexible, on-demand scalability by reducing computing infrastructures, platforms, and services to commodities acquired and paid for on demand through a set of cloud providers. Today, the transition of cloud computing from a subject of research and innovation to a critical infrastructure is proceeding at an incredibly fast pace. A potentially dangerous consequence of this speedy transition to practice is the premature adoption, and ossification, of the models, technologies, and standards underlying this critical infrastructure. This state of affairs is exacerbated by the fact that innovative research on production-scale platforms is becoming the purview of a small number of public cloud providers. Specifically, the academic research communities are effectively excluded from the opportunity to contribute meaningfully to the evolution, not to mention the innovation and healthy mutation, of cloud computing technologies. As the dependence of our society and economy on cloud computing increases, so does the realization that the academic research community cannot be shut out from contributing to the design and evolution of this critical infrastructure. In this article we provide an alternative vision: that of an Open Cloud eXchange (OCX), a public cloud marketplace where many stakeholders, rather than just a single cloud provider, participate in implementing and operating the cloud, thus creating an ecosystem that will bring the innovation of a broader community to bear on a much healthier and more efficient cloud marketplace.

    Native structure-based modeling and simulation of biomolecular systems per mouse click

    Background: Molecular dynamics (MD) simulations provide valuable insight into biomolecular systems at the atomic level. Notwithstanding the ever-increasing power of high-performance computers, current MD simulations face several challenges: the fastest atomic movements require time steps of a few femtoseconds, which are small compared to biomolecularly relevant timescales of milliseconds or even seconds for large conformational motions. At the same time, scalability to a large number of cores is limited, mostly due to long-range interactions. An appealing alternative to atomic-level simulations is coarse-graining, i.e., reducing the resolution of the system or the complexity of the Hamiltonian to improve sampling while decreasing computational costs. Native structure-based models, also called Gō-type models, are based on energy landscape theory and the principle of minimal frustration. They have been tremendously successful in explaining fundamental questions of, e.g., protein folding, RNA folding, or protein function. At the same time, they are computationally inexpensive enough to run complex simulations on smaller computing systems or even commodity hardware. Still, their setup and evaluation are quite complex, even though sophisticated software packages support their realization. Results: Here, we establish an efficient infrastructure for native structure-based models to support the community and enable high-throughput simulations on remote computing resources via GridBeans and UNICORE middleware. This infrastructure organizes the setup of such simulations, resulting in increased comparability of simulation results. At the same time, complete workflows for advanced simulation protocols can be established and managed on remote resources through a graphical interface, which increases the reusability of protocols and additionally lowers the entry barrier into such simulations for, e.g., experimental scientists who want to compare their results against simulations. We demonstrate the power of this approach by illustrating it for protein folding simulations for a range of proteins. Conclusions: We present software enhancing the entire workflow for native structure-based simulations, including exception handling and evaluations. By extending the capability and improving the accessibility of existing simulation packages, the software goes beyond the state of the art in the domain of biomolecular simulations. Thus we expect that it will stimulate more individuals from the community to employ modeling more confidently in their research.
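    For readers unfamiliar with Gō-type models, the following Python sketch evaluates the kind of native-contact term such models commonly use; the 12-10 potential and its parameters are a widespread textbook choice and an assumption here, not necessarily the exact Hamiltonian of the software described:

        def go_contact_energy(r, sigma, eps=1.0):
            # Common 12-10 native-contact potential in Go-type models:
            # minimum of depth -eps exactly at the native distance r = sigma,
            # so the native structure sits at the bottom of the energy landscape.
            x = sigma / r
            return eps * (5 * x**12 - 6 * x**10)

        # Energy of one native contact below, at, and beyond its native distance.
        for r in (0.9, 1.0, 1.5):
            print(r, round(go_contact_energy(r, sigma=1.0), 3))

    Because each native contact contributes only a cheap pairwise term like this, and expensive long-range electrostatics are absent, such models run well on commodity hardware.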

    Evolutionary Strategic Management based on Organisational Community Ecology: An Example from Saudi Real Estate

    Saudi Arabia’s Vision 2030 program dramatically changes the landscape for organisations. One of the program's aims is to increase pilgrim numbers to 30 million by 2030. This holds significant implications for hotel and retail organisations located in Makkah, Saudi Arabia. Our research aims to quantify existing organisational selection measures in the presence of growing resource numbers, with the fundamental question: does selection take place even in an environment of rising resources? We utilise organisational ecology, a well-known theoretical framework for analysing organisation-environment relationships. It states that organisations are “selected” for removal once they become non-aligned with their environment. Historically, this body of knowledge has demonstrated the impact of selection due to changes in the understanding of organisation categories (organisational forms), changes in the number of organisations in a category (density dependence), market partitioning, and the impact of organisation age. Our research applied the same theoretical fragments to dissimilar organisations in a religious environment, bringing novelty to existing organisational ecology research, to identify the selection pressures faced by such organisations. We used data from private (Knight Frank and STR) and public sources (Saudi Arabian government bodies) to develop an understanding of Makkah's hotel and retail organisations and to test nine (9) hypotheses explicating issues pertaining to organisational schematisation, vital rates, appeal structures, organisational diversity, and niche structures. Our research yielded some interesting results. Namely, social schematisation is sensitive to hotel star ratings but not to branding structures. The retail structure does not experience any schematisation selection pressure. 5-star hotels prefer to set up in proximity to the Grand Mosque, while 4-star hotels demonstrate an elevated mortality hazard within the population but experience increasing founding rates as distance increases. In terms of density dependence, the hotel population is undergoing legitimation, but with interesting sub-population dynamics: branded hotels face elevated mortality hazards as pilgrims have choice, while competition within the unbranded hotel category improves their life chances. Perhaps the most interesting finding within our research context is the interrelationship of founding events between the hotel and retail populations: once a hotel is founded, a retail founding follows within one year, leading to another hotel founding within two years. Lastly, our research observed that generalists fare better than those with a specialist identity.
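    The density-dependence result above follows the classic legitimation/competition form from organisational ecology. As a purely illustrative Python sketch (the coefficients are invented, not estimates from this study), a founding rate can rise with density while legitimation dominates and fall once competition takes over:

        import math

        def founding_rate(n, a=0.05, b=0.0002, base=1.0):
            # Classic density-dependence specification: the linear term
            # (legitimation) raises the founding rate at low density; the
            # quadratic term (competition) depresses it at high density.
            return base * math.exp(a * n - b * n * n)

        # The rate rises, peaks at n = a / (2 * b) = 125, then falls.
        for n in (10, 100, 125, 200, 400):
            print(n, round(founding_rate(n), 3))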

    Evaluating demand response opportunities for data centers

    Data center demand response is a solution to a problem that has only recently emerged: today's energy system is undergoing major transformations due to the increasing share of intermittent renewable power sources such as solar and wind. As the power grid physically requires balancing power feed-in and power draw at all times, traditionally power generation plants with short ramp-up times were activated to avoid grid imbalances. Additionally, through demand response schemes, power consumers can be incentivized to manipulate their planned power profile in order to activate hidden sources of flexibility. The data center industry has been identified as a suitable candidate for demand response as it is continuously growing and relies on highly automated processes. Technically, data centers can provide flexibility by, among other things, temporally or geographically shifting their workload or shutting down servers. There is a large body of work analysing the potential of data center demand response. Most of it, however, deals with very specific data center set-ups in very specific power flexibility markets, so external validity is limited. The presented thesis goes beyond the related work by creating a framework for modeling data center demand response at a high level of abstraction that allows subsuming a great variety of specific models in the area: based on a generic architecture of demand-response-enabled data centers, this is formalized through a micro-economics-inspired optimization framework by generating technical power flex functions and an associated cost and market skeleton. As part of a two-step evaluation, an architectural framework for simulating demand response is created. Subsequently, a simulation instance of this high-level architecture is developed for a specific HPC data center in Germany, implementing two power management strategies, namely temporally shifting workload and manipulating CPU frequency. The flexibility extracted is then monetized on the secondary reserve market and on the EPEX day-ahead market in Germany. As a result, in 2014 this data center might have achieved the largest benefit gain by changing from static electricity pricing to dynamic EPEX prices without changing its power profile. Through demand response it might have created an additional gross benefit of 4% of the power bill on the secondary reserve market. A sensitivity analysis, however, showed that these results depend heavily on specific parameters such as service level agreements and job heterogeneity. The results show that even though concrete simulations help in understanding demand response for individual data centers, the modeling framework is needed to understand their relevance from a system-wide viewpoint.
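    As a minimal sketch of temporal workload shifting, one of the two power management strategies the thesis implements: the prices and job sizes below are invented, and this greedy rule stands in for the thesis's optimization framework rather than reproducing it. Deferrable work is simply packed into the cheapest day-ahead hours before its deadline:

        # Hypothetical EPEX-style day-ahead prices (EUR/MWh), one per hour.
        prices = [42, 38, 35, 33, 34, 40, 55, 70, 68, 60, 52, 48,
                  45, 44, 46, 50, 58, 72, 75, 66, 55, 48, 44, 40]

        def shift_flexible_load(energy_slots, deadline_hour):
            # Greedy temporal shifting: place deferrable 1 MWh slots into
            # the cheapest hours before the job's deadline.
            hours = sorted(range(deadline_hour), key=lambda h: prices[h])
            chosen = hours[:energy_slots]
            return sorted(chosen), sum(prices[h] for h in chosen)

        hours, cost = shift_flexible_load(energy_slots=4, deadline_hour=12)
        print("run in hours", hours, "for", cost, "EUR")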

    Resource Management Policies for Cloud-based Interactive 3D Applications

    The increasing interest in the cloud computing paradigm is leading several different applications and services to move to the 'cloud'. These range from general storage and computing services to document management systems and office applications. A new challenge is the migration to the cloud of interactive 3D applications, especially those designed for professional usage (e.g., scientific data visualizers, CAD instruments, 3D medical modeling applications). Among the several hurdles arising from their specific hardware and software requirements, an important issue to address is the definition of novel management policies that can properly support these applications, namely, policies that ensure efficient resource utilization together with a sufficient quality as perceived by users. This paper presents some preliminary results in this direction and discusses possible future work in this field. Our work is part of a wider project aiming at developing a complete architecture to offer interactive 3D applications in a cloud computing environment; hence, we refer to this particular solution in this study.
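    The paper's policies are only summarized above, so purely as an illustration of the stated trade-off between consolidation and perceived quality, here is a toy Python placement rule; the server fields, the latency estimate, and the QoS bound are all hypothetical:

        def place_session(servers, gpu_demand, qos_latency_ms=30.0):
            # Consolidate for efficiency: prefer the most loaded server that
            # still fits the session and keeps estimated latency within QoS.
            candidates = [s for s in servers
                          if s["free_gpu"] >= gpu_demand
                          and s["est_latency_ms"] <= qos_latency_ms]
            if not candidates:
                return None  # scale out rather than degrade user experience
            best = max(candidates, key=lambda s: s["load"])
            best["free_gpu"] -= gpu_demand
            best["load"] += gpu_demand
            return best["name"]

        servers = [
            {"name": "gpu-1", "free_gpu": 0.3, "load": 0.7, "est_latency_ms": 25},
            {"name": "gpu-2", "free_gpu": 0.8, "load": 0.2, "est_latency_ms": 18},
        ]
        print(place_session(servers, gpu_demand=0.25))  # packs onto gpu-1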

    Coordinating Resource Use in Open Distributed Systems

    In an open distributed system, computational resources are peer-owned and distributed over time and space. The system is open to interactions with its environment, and resources can dynamically join or leave the system, or be discovered at runtime. This dynamicity creates opportunities to carry out computations without statically owned resources, harnessing the collective compute power of the resources connected by the Internet. However, realizing this potential requires efficient and scalable resource discovery, coordination, and control, which present challenges in a dynamic, open environment. In this thesis, I present an approach that addresses these challenges by separating the functionality concerns of concurrent computations from those of coordinating their resource use, with the purpose of reducing programming complexity and aiding the development of correct, efficient, and resource-aware concurrent programs. As a first step towards effectively coordinating distributed resources, I developed DREAM, a Distributed Resource Estimation and Allocation Model, which enables computations to reason about the future availability of resources. I then developed a fine-grained resource coordination scheme for distributed computations. The coordination scheme integrates DREAM-based resource reasoning into a distributed scheduler for deciding and enforcing fine-grained resource-use schedules for distributed computations. To control the overhead caused by the coordination, a tuner is implemented that explicitly balances the overhead of the control mechanisms against the extent of control exercised. The effectiveness and performance of the resource coordination approach have been evaluated in a number of case studies. Experimental results show that the approach can effectively schedule computations to support various coordination objectives, such as ensuring quality of service, power-efficient execution, and dynamic load balancing. The overhead caused by the coordination mechanism is relatively modest and adjustable through the tuner. In addition, the coordination mechanism does not add extra programming complexity to computations.
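    The abstract does not spell out DREAM's internals, so the following Python sketch only illustrates the underlying idea of reasoning about future resource availability before committing a computation; the slotted ledger and its interface are hypothetical:

        class ResourceLedger:
            # Toy availability ledger: per-time-slot capacity minus reservations.
            def __init__(self, capacity, slots):
                self.free = [capacity] * slots

            def estimate(self, start, length):
                # Worst-case resources available over a future window.
                return min(self.free[start:start + length])

            def reserve(self, start, length, amount):
                if self.estimate(start, length) < amount:
                    return False  # insufficient predicted capacity
                for t in range(start, start + length):
                    self.free[t] -= amount
                return True

        ledger = ResourceLedger(capacity=8, slots=24)
        print(ledger.reserve(start=2, length=4, amount=6))  # True
        print(ledger.reserve(start=3, length=4, amount=4))  # False: window already committed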

    A reference model for integrated energy and power management of HPC systems

    Optimizing a computer for highest performance dictates the efficient use of its limited resources. Computers as a whole are rather complex, so it is not sufficient to optimize hardware and software components independently. Instead, a holistic view that manages the interactions of all components is essential to achieve system-wide efficiency. For High Performance Computing (HPC) systems today, the major limiting resources are energy and power. The hardware mechanisms to measure and control energy and power are exposed to software, and the software systems using these mechanisms range from firmware, the operating system, and system software to tools and applications. Efforts to improve the energy and power efficiency of HPC systems and the infrastructure of HPC centers achieve perpetual advances, but in isolation these efforts cannot cope with the rising energy and power demands of large-scale systems. A systematic way to integrate multiple optimization strategies that build on complementary, interacting hardware and software systems is missing. This work provides a reference model for integrated energy and power management of HPC systems: the Open Integrated Energy and Power (OIEP) reference model. The goal is to enable the implementation, setup, and maintenance of modular, system-wide energy and power management solutions. The proposed model goes beyond current practices, which focus on individual HPC centers or implementations, in that it can universally describe any hierarchical energy and power management system with a multitude of requirements. The model builds solid foundations: it is understandable and verifiable, it guarantees stable interaction of hardware and software components, and it establishes a known and trusted chain of command. This work identifies the main building blocks of the OIEP reference model, describes their abstract setup, and shows concrete instances thereof. A principal aspect is how the individual components are connected and interface in a hierarchical manner, and thus can optimize for the global policy pursued as a computing center's operating strategy. In addition to the reference model itself, a method for applying it is presented and used to show the model's practicality. For future research in energy and power management of HPC systems, the OIEP reference model forms a cornerstone for realizing --- planning, developing, and integrating --- innovative energy and power management solutions. For HPC systems themselves, it supports the transparent management of current systems with their inherent complexity, allows the integration of novel solutions into existing setups, and enables the design of new systems from scratch. In fact, the OIEP reference model represents a basis for holistic, efficient optimization.
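    As a toy illustration of the hierarchical chain-of-command idea (a sketch of the general concept, not the OIEP model itself), a center-level power policy can be propagated down a tree of components, with each node dividing its budget among its children; the node names and the equal split are assumptions:

        class PowerNode:
            # Toy hierarchical power management: each node receives a budget
            # from its parent and delegates shares to its children.
            def __init__(self, name, children=None):
                self.name = name
                self.children = children or []

            def set_budget(self, watts):
                print(f"{self.name}: {watts:.0f} W")
                if self.children:
                    share = watts / len(self.children)  # naive equal split
                    for child in self.children:
                        child.set_budget(share)

        # A center-wide operating strategy flows down to islands, then nodes.
        system = PowerNode("center", [
            PowerNode("island-0", [PowerNode("node-0"), PowerNode("node-1")]),
            PowerNode("island-1", [PowerNode("node-2"), PowerNode("node-3")]),
        ])
        system.set_budget(4000)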

    Adapting Datacenter Capacity for Greener Datacenters and Grid

    Cloud providers are adapting datacenter (DC) capacity to reduce carbon emissions. With hyperscale datacenters exceeding 100 MW individually, and in some grids exceeding 15% of power load, DC adaptation is large enough to harm power grid dynamics, increasing carbon emissions and power prices, or reducing grid reliability. To avoid such harm, we explore coordination of DC capacity changes with varying scope in space and time. In space, the coordination scope spans a single datacenter, a group of datacenters, and datacenters with the grid. In time, the scope ranges from online to day-ahead. We also consider what DC and grid information is used (e.g., real-time and day-ahead average carbon, power price, and compute backlog). For example, in our proposed PlanShare scheme, each datacenter uses day-ahead information to create a capacity plan and shares it, allowing global grid optimization (over all loads, over the entire day). We evaluate the reduction in DC carbon emissions. Results show that a local coordination scope fails to reduce carbon emissions significantly (3.2%--5.4% reduction). Expanding the coordination scope to a set of datacenters improves results slightly (4.9%--7.3%). PlanShare, with grid-wide coordination and full-day capacity planning, performs best, reducing DC emissions by 11.6%--12.6%, 1.56x--1.26x better than the results of the best local, online approach. PlanShare also achieves lower cost. We expect these advantages to increase as renewable generation in power grids increases. Further, a known full-day DC capacity plan provides a stable target for DC resource management. (Published at e-Energy '23: Proceedings of the 14th ACM International Conference on Future Energy Systems.)
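    In the spirit of PlanShare's day-ahead planning, the following Python sketch builds a full-day capacity plan by packing a flexible compute backlog into the lowest-carbon hours; the carbon forecast, capacity figures, and greedy packing are illustrative assumptions, not the paper's actual method:

        # Hypothetical day-ahead grid carbon intensity forecast (gCO2/kWh).
        carbon = [420, 410, 400, 390, 380, 350, 300, 250, 200, 180, 170, 175,
                  190, 210, 260, 320, 380, 430, 450, 440, 430, 425, 420, 415]

        def plan_capacity(base_mw, flex_mwh, cap_mw):
            # Run base load every hour; place the flexible backlog into the
            # cleanest hours, up to the datacenter's capacity cap.
            plan = [base_mw] * 24
            headroom = cap_mw - base_mw
            for h in sorted(range(24), key=lambda h: carbon[h]):
                add = min(headroom, flex_mwh)
                plan[h] += add
                flex_mwh -= add
                if flex_mwh <= 0:
                    break
            return plan

        # The resulting plan is a stable full-day target the grid can rely on.
        print(plan_capacity(base_mw=60, flex_mwh=120, cap_mw=100))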