    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems so as to better understand their goals and methodology, which helps evaluate their applicability to similar problems. The taxonomy also provides a "gap analysis" of the area, through which researchers can identify new issues for investigation. We also hope that the proposed taxonomy and mapping give new practitioners an accessible entry point into this complex area of research.
    Comment: 46 pages, 16 figures, Technical Report
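
    The "gap analysis" is the most mechanical idea in this abstract, so a small sketch may help: tag each surveyed system with one value per taxonomy dimension, then report the values that no system covers. Every dimension, value and system name below is hypothetical, chosen only to illustrate the shape of such a mapping, and is not taken from the paper.

```python
# Illustrative sketch of a taxonomy-to-systems mapping with a gap analysis.
# All dimensions, values, and system names here are invented.
from collections import defaultdict

TAXONOMY = {
    "architecture": {"hierarchical", "federated", "hybrid"},
    "data_transport": {"file-level", "block-level", "object-level"},
    "replication": {"static", "dynamic"},
    "scheduling": {"data-aware", "compute-only"},
}

SYSTEMS = {
    "SystemA": {"architecture": "hierarchical", "data_transport": "file-level",
                "replication": "static", "scheduling": "data-aware"},
    "SystemB": {"architecture": "federated", "data_transport": "object-level",
                "replication": "dynamic", "scheduling": "compute-only"},
}

def gap_analysis(taxonomy, systems):
    """Return the taxonomy values covered by no surveyed system (the 'gaps')."""
    covered = defaultdict(set)
    for tags in systems.values():
        for dim, value in tags.items():
            covered[dim].add(value)
    return {dim: values - covered[dim] for dim, values in taxonomy.items()}

print(gap_analysis(TAXONOMY, SYSTEMS))
# e.g. {'architecture': {'hybrid'}, 'data_transport': {'block-level'}, ...}
```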

    A Mathematical Framework for Optimizing Disaster Relief Logistics

    In today's world, where disasters seem to strike all corners of the globe, the importance of emergency management is undeniable. Much human loss and unnecessary destruction of infrastructure can be avoided with better planning and foresight. When a disaster strikes, aid organizations often face the significant problem of transporting large amounts of many different commodities, including food, clothing, medicine, medical supplies, machinery, and personnel, from several points of origin to a number of destinations in the disaster areas. The transportation of supplies and relief personnel must be done quickly and efficiently to maximize the survival rate of the affected population. The goal of this research is to develop a comprehensive model that describes integrated logistics operations in response to natural disasters at the operational level. The proposed mathematical model integrates three main components. First, it controls the flow of several relief commodities from sources through the supply chain until they are delivered into the hands of recipients. Second, it considers a large-scale unconventional vehicle routing problem with mixed pickup and delivery schedules for multiple transportation modes. Third, following FEMA's complex logistics structure, a special facility location problem is considered that involves four layers of temporary facilities at the federal and state levels. Such an integrated model provides the opportunity for a centralized operation plan that can effectively eliminate delays and assign the limited resources in a way that is optimal for the entire system. The proposed model is a large-scale mixed integer program. To solve it, two sets of heuristic algorithms are proposed: four heuristics for the multi-echelon facility location problem and four for the general integer vehicle routing problem. Overall, the proposed heuristics efficiently found optimal or near-optimal solutions in minutes of CPU time, whereas solving the same problems with a commercial solver required hours of computation. Numerical case studies and extensive sensitivity analyses are conducted to evaluate the properties of the model and the solution algorithms. The numerical analysis indicates the model's ability to handle large-scale relief operations in adequate detail. The solution algorithms were tested on several randomly generated cases and showed robust solution quality as well as computation time.
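
    To make the structure of such a model concrete, here is a deliberately tiny sketch in the same spirit: a single commodity, a single period, one layer of candidate staging facilities, and no vehicle routing, written with the PuLP modelling library. It is not the dissertation's four-layer, multi-modal formulation; all sets, costs and capacities below are invented.

```python
# Minimal relief-logistics MIP sketch (hypothetical data, single commodity).
import pulp

supplies = {"depot1": 80, "depot2": 60}       # units available at origins
demands = {"area1": 50, "area2": 70}          # affected-area needs
facilities = ["staging1", "staging2"]         # candidate temporary sites
open_cost = {"staging1": 100, "staging2": 120}
cap = {"staging1": 90, "staging2": 110}
c_in = {(d, f): 2 for d in supplies for f in facilities}   # depot->facility
c_out = {(f, a): 3 for f in facilities for a in demands}   # facility->area

m = pulp.LpProblem("relief_sketch", pulp.LpMinimize)
x = pulp.LpVariable.dicts("in", c_in, lowBound=0)    # inbound flows
y = pulp.LpVariable.dicts("out", c_out, lowBound=0)  # outbound flows
z = pulp.LpVariable.dicts("open", facilities, cat="Binary")

# Objective: shipping cost plus fixed cost of opening staging facilities.
m += (pulp.lpSum(c_in[k] * x[k] for k in c_in)
      + pulp.lpSum(c_out[k] * y[k] for k in c_out)
      + pulp.lpSum(open_cost[f] * z[f] for f in facilities))

for d in supplies:    # cannot ship more than a depot holds
    m += pulp.lpSum(x[d, f] for f in facilities) <= supplies[d]
for a in demands:     # every area's demand must be met
    m += pulp.lpSum(y[f, a] for f in facilities) >= demands[a]
for f in facilities:  # flow balance; capacity only at open facilities
    m += pulp.lpSum(y[f, a] for a in demands) <= pulp.lpSum(x[d, f] for d in supplies)
    m += pulp.lpSum(x[d, f] for d in supplies) <= cap[f] * z[f]

m.solve()
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```

    The full model adds what this sketch omits: multiple commodities and periods, vehicle routing with mixed pickups and deliveries, and the four facility layers, which is precisely why exact solvers need hours and heuristics become attractive.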

    Modeling High-throughput Applications for in situ Analytics

    With the goal of performing exascale computing, I/O management becomes more and more critical to maintaining system performance. While the computing capacities of machines keep growing, the I/O capabilities of systems do not increase as fast. We are able to generate more data but unable to manage it efficiently due to the variability of I/O performance, so limiting the requests to the Parallel File System (PFS) becomes necessary. To address this issue, new strategies are being developed, such as online in situ analysis. The idea is to overcome the limitations of basic post-mortem data analysis, where the data must first be stored on the PFS and processed later. Several software solutions allow users to dedicate specific nodes to data analysis and to distribute the computation tasks over different sets of nodes; thus far, however, they rely on the user manually partitioning resources and allocating tasks (simulations, analysis). In this work, we propose a memory-constrained model for in situ analysis. We use this model to derive different scheduling policies that determine both the number of resources that should be dedicated to analysis functions and an efficient schedule for those functions. We evaluate these policies and show the importance of considering memory constraints in the model. Finally, we discuss the different challenges that must be addressed in order to build automatic tools for in situ analytics.
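
    As a rough illustration of the resource-partitioning question (not one of the paper's policies), the sketch below packs analysis functions onto dedicated nodes first-fit decreasing under a per-node memory budget, and dedicates the smallest number of nodes that suffices, leaving the rest to the simulation. All node counts and memory figures are made up.

```python
# Toy memory-constrained partitioning for in situ analysis (hypothetical data).

def pack(functions, node_mem):
    """First-fit decreasing by memory footprint; returns per-bin free memory,
    or None if any single function exceeds the node budget."""
    bins = []  # remaining memory on each dedicated analysis node
    for mem in sorted(functions, reverse=True):
        if mem > node_mem:
            return None
        for i, free in enumerate(bins):
            if mem <= free:
                bins[i] = free - mem
                break
        else:
            bins.append(node_mem - mem)  # open a new analysis node
    return bins

def dedicate(total_nodes, node_mem, analysis_mems):
    """Smallest number of analysis nodes that fits all functions; the
    remaining nodes stay with the simulation."""
    bins = pack(analysis_mems, node_mem)
    if bins is None or len(bins) >= total_nodes:
        raise ValueError("analysis does not fit under the memory constraint")
    return len(bins), total_nodes - len(bins)

analysis_nodes, sim_nodes = dedicate(total_nodes=64, node_mem=128,
                                     analysis_mems=[100, 60, 50, 40, 30])
print(analysis_nodes, sim_nodes)  # 3 dedicated analysis nodes, 61 simulation
```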

    Horizontal and Vertical Cooperation in Distribution Chains with Cross-Docks

    Development of a quality assurance prototype for intrusion detection systems

    Thesis (Master) -- Izmir Institute of Technology, Computer Engineering, Izmir, 2002. Includes bibliographical references (leaves 75-79). Text in English; abstract in Turkish and English. ix, 97 leaves.
    Quality assurance is an essential activity for any business interacting with consumers. A considerable number of projects are under way to develop intrusion detection systems (IDSs); however, efforts to establish standards and practices that ensure the quality of such systems are comparatively scarce. Quality assurance activities for IDSs should ensure conformance to explicitly stated functional and performance requirements, as well as to the implicit characteristics expected of information security tools. This dissertation establishes guidelines to review, evaluate and possibly develop an IDS. To establish the guidelines, generic IDS and software requirements, software quality factors and design principles from the related literature are used, and these requirements are presented both on a generic IDS model developed here and in Common Criteria Protection Profile format. First the guidelines are developed; then they are applied to the evaluation of a specific IDS product.
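
    One way to picture the guideline idea is as a requirements checklist evaluated against a candidate product. The sketch below is purely illustrative: the requirement names, fields and thresholds are invented and are not drawn from the thesis or from any actual Protection Profile.

```python
# Hypothetical checklist-style evaluation of an IDS against named requirements.
REQUIREMENTS = {
    "logs raw events":       lambda ids: ids["event_logging"],
    "alerts in real time":   lambda ids: ids["alert_latency_s"] <= 1.0,
    "documented signatures": lambda ids: ids["signature_docs"],
    "fails closed on crash": lambda ids: ids["fail_mode"] == "closed",
}

def evaluate(ids_properties):
    """Return (passed, failed) requirement names for one candidate IDS."""
    passed = [name for name, check in REQUIREMENTS.items()
              if check(ids_properties)]
    failed = [name for name in REQUIREMENTS if name not in passed]
    return passed, failed

candidate = {"event_logging": True, "alert_latency_s": 0.4,
             "signature_docs": False, "fail_mode": "closed"}
passed, failed = evaluate(candidate)
print(f"passed {len(passed)}/{len(REQUIREMENTS)}; failed: {failed}")
```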

    Master of Science

    Recent advancements in High Performance Computing (HPC) infrastructure, with traditional computing systems augmented by accelerators such as graphics processing units (GPUs) and coprocessors such as the Intel Xeon Phi, have successfully enabled predictive simulations, specifically Computational Fluid Dynamics (CFD), with more accuracy and speed. One of the most significant challenges in high-performance computing is to provide a software framework that can scale efficiently and minimize the rewriting of code needed to support diverse hardware configurations. Algorithms and framework support have been developed to deal with these complexities and to provide abstractions that make a task compatible with various hardware targets. The software is written in C++ and represented as a Directed Acyclic Graph (DAG) whose nodes implement the actual mathematical calculations. This thesis presents an improved approach for scheduling and executing computational tasks within a heterogeneous CPU-GPU computing system while insulating application developers from the inherent complexity of parallelism. The details are presented in a context that facilitates the solution of partial differential equations on large clusters using graph theory.
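
    A greedy earliest-finish-time heuristic is one simple way to picture scheduling a task DAG over a CPU and a GPU; the thesis's actual scheduler is more sophisticated, and every task name and cost below is hypothetical. Each ready task is placed on whichever device lets it finish soonest, given when its dependencies complete.

```python
# Simplified heterogeneous DAG scheduling sketch (hypothetical tasks/costs).
from collections import defaultdict

COSTS = {"a": (4, 1), "b": (2, 3), "c": (3, 1), "d": (5, 2)}  # (cpu, gpu) cost
EDGES = [("a", "c"), ("b", "c"), ("c", "d")]                  # u before v

def schedule(costs, edges):
    parents = defaultdict(list)
    children = defaultdict(list)
    for u, v in edges:
        parents[v].append(u)
        children[u].append(v)
    remaining = {t: len(parents[t]) for t in costs}  # unmet dependency counts
    device_free = {"cpu": 0.0, "gpu": 0.0}           # when each device idles
    finish = {}
    ready = [t for t in costs if remaining[t] == 0]
    while ready:
        t = ready.pop()
        start = max((finish[p] for p in parents[t]), default=0.0)
        cpu_fin = max(start, device_free["cpu"]) + costs[t][0]
        gpu_fin = max(start, device_free["gpu"]) + costs[t][1]
        dev, fin = ("cpu", cpu_fin) if cpu_fin <= gpu_fin else ("gpu", gpu_fin)
        device_free[dev] = fin
        finish[t] = fin
        print(f"{t} -> {dev}, finishes at {fin}")
        for c in children[t]:  # release children whose dependencies are done
            remaining[c] -= 1
            if remaining[c] == 0:
                ready.append(c)
    return finish

fin = schedule(COSTS, EDGES)
print("makespan:", max(fin.values()))
```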